• Title/Summary/Keyword: software change

Search Results: 1,390

Estimate and Environmental Assessment of Greenhouse Gas(GHG) Emissions and Sludge Emissions in Wastewater Treatment Processes for Climate Change (기후변화를 고려한 하수처리공법별 온실가스 및 슬러지 배출량 산정 및 환경성 평가)

  • Oh, Tae-Seok;Kim, Min-Jeong;Lim, Jung-Jin;Kim, Yong-Su;Yoo, Chang-Kyoo
    • Korean Chemical Engineering Research / v.49 no.2 / pp.187-194 / 2011
  • In compliance with international law banning the ocean dumping of sludge, the proper treatment of the sludge generated by wastewater treatment processes has become a pressing problem. Sewage and sludge are generally handled under anaerobic conditions during treatment and landfilling, during which methane $(CH_{4})$ and nitrous oxide $(N_{2}O)$ are discharged. Because these gases are known contributors to global warming, wastewater treatment processes are recognized as sources of greenhouse gas (GHG) emissions. This study suggests a new approach to estimating and environmentally assessing the GHG and sludge emissions of wastewater treatment processes. It was carried out by calculating the total amount of GHG emitted from biological wastewater treatment processes and the amount of sludge they generate. Four major biological wastewater treatment processes, Anaerobic/Anoxic/Oxidation $(A_{2}O)$, Bardenpho, Virginia Initiative Plant (VIP), and University of Cape Town (UCT), are modeled with the GPS-X software. Based on the modeling results, the GHG emissions and sludge production of each process are calculated according to the Intergovernmental Panel on Climate Change (IPCC) 2006 guideline report. GHG emissions are calculated for both the water and the sludge treatment lines, and an environmental assessment is performed for several sludge treatment scenarios, namely composting, incineration, and reclamation; the scenarios are compared using a unified index of economic and environmental performance. Among the four processes, the Bardenpho process emitted the least GHG with the lowest environmental impact, and among the sludge treatments, composting emitted the least GHG.
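As context for the IPCC 2006 calculation step named in this abstract: CH4 from wastewater is estimated from the organic load as $(TOW - S) \times B_{0} \times MCF - R$, and N2O from effluent nitrogen. A minimal sketch under the guideline defaults, with hypothetical plant inputs standing in for the paper's GPS-X outputs:

```python
# Minimal sketch of the IPCC 2006 estimation step, assuming the guideline
# defaults B0 = 0.6 kg CH4/kg BOD and effluent EF = 0.005 kg N2O-N/kg N;
# the plant-level inputs are hypothetical, not the paper's GPS-X outputs.

B0 = 0.6  # kg CH4 per kg BOD (IPCC 2006 default maximum producing capacity)

def ch4_emissions_kg(tow_kg_bod, sludge_removed_kg_bod, mcf, recovered_kg=0.0):
    """CH4 = (TOW - S) * EF - R, with EF = B0 * MCF."""
    return (tow_kg_bod - sludge_removed_kg_bod) * B0 * mcf - recovered_kg

def n2o_effluent_kg(n_effluent_kg, ef=0.005):
    """N2O = N_effluent * EF * 44/28 (convert N2O-N to N2O)."""
    return n_effluent_kg * ef * 44.0 / 28.0

ch4 = ch4_emissions_kg(tow_kg_bod=120_000, sludge_removed_kg_bod=30_000, mcf=0.3)
n2o = n2o_effluent_kg(n_effluent_kg=8_000)
co2e = ch4 * 21 + n2o * 310  # SAR GWP-100 factors used with IPCC 2006 inventories
print(f"CH4 {ch4:,.0f} kg, N2O {n2o:,.0f} kg, total {co2e:,.0f} kg CO2-eq")
```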

The effect of eco-friendly clothing teaching using Future Problem Solving Program on cultivating creative character (미래문제해결프로그램(FPSP)을 적용한 친환경 의생활 수업이 창의.인성 함양에 미치는 영향)

  • Lee, Seung-Hae;Lee, Hye-Ja
    • Journal of Korean Home Economics Education Association / v.24 no.3 / pp.143-173 / 2012
  • We investigated environmental problems related to clothing and attempted practical solutions using the Future Problem Solving Program (FPSP) in order to cultivate creative character in teenagers. We applied the teaching and learning plans to seventy-seven first-year high school students in two classes in Gyeonggi-do, one hour per day for three weeks, from August 23 to September 8, 2011. Statistical analyses were performed using SPSS for Windows (version 17.0), and mean differences between pretest and posttest were evaluated with Student's t-test. We selected 'production of fabrics, production of clothing, disposal and recycling of clothing, and washing of clothing' as the learning themes within the educational content factor 'clothing culture in consideration of the environment', and developed thirteen teaching and learning plans and educational materials comprising 4 problems, 2 worksheets, 10 team worksheets, 7 video materials, and 7 PowerPoint materials using the FPSP. The measured fluency, flexibility, originality, and problem-solving ability improved significantly. In particular, the level of creativity on the fluency, flexibility, and originality items rose markedly, from 'below average' to 'above average', regardless of academic record and gender. Problem-solving ability improved more in female students than in male students but showed no significant correlation with academic record. The analysis of character change showed the greatest improvement in awareness of environmental protection, the character factor in the educational contents. Personality, confidence, consideration, and cooperation under the FPSP learning method also improved significantly, but character change did not correlate with academic record or gender. In the present study, we found that home economics has a positive effect on cultivating creative character. When a course of creative problem solving and a course of creative output from the FPSP are applied to students selectively and appropriately, their problem-solving ability, creative character, and interest in home economics can all be enhanced.
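For readers who want to reproduce the kind of pretest/posttest comparison this abstract describes, a minimal sketch of a paired Student's t-test follows; the scores are simulated placeholders, not data from the study.

```python
# Minimal sketch of a paired pretest/posttest comparison over 77 students.
# Scores are simulated placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pretest = rng.normal(60.0, 10.0, size=77)
posttest = pretest + rng.normal(5.0, 8.0, size=77)   # assumed mean improvement

t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")        # p < 0.05 -> significant change
```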


The Stock Portfolio Recommendation System based on the Correlation between the Stock Message Boards and the Stock Market (인터넷 주식 토론방 게시물과 주식시장의 상관관계 분석을 통한 투자 종목 선정 시스템)

  • Lee, Yun-Jung;Kim, Gun-Woo;Woo, Gyun
    • KIPS Transactions on Software and Data Engineering / v.3 no.10 / pp.441-450 / 2014
  • The stock market changes constantly, and stock prices sometimes plummet or surge unaccountably; the market is therefore recognized as a complex system whose price changes are unpredictable. Recently, many researchers have tried to understand the stock market as a network of individual stocks and to find clues to price changes in the big data created in real time on the Internet. We focus on the correlation between stock prices and human interactions on the Internet, especially on stock message boards. To uncover this correlation, we collected and investigated the articles concerning 57 target companies, all members of the KOSPI200. From the analysis, we found no significant correlation between stock prices and article volume overall, but the strength of the correlation between article volume and stock price is related to the stock's return. Based on this result, we propose a new method for recommending a stock portfolio. In a simulated investment test using article data from the stock message boards of the 'Daum' portal site, the return of our portfolio was about 1.55% per month, about 0.72% and 1.21% higher than those of the Markowitz efficient portfolio and the KOSPI average, respectively. Using data from the 'Naver' portal site, the return of the proposed portfolio was about 0.90% per month, which is 0.35%, 0.40%, and 0.58% higher than those of our previous portfolio, the Markowitz efficient portfolio, and the KOSPI average, respectively. This study shows that collective human behavior on Internet stock message boards can be very helpful for understanding the stock market, and that the correlation between stock prices and this collective behavior can be used to invest in stocks.
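A minimal sketch of the kind of per-stock correlation analysis this abstract describes, relating message-board article volume to daily returns; the DataFrame schema and column names are hypothetical, and the paper's exact portfolio selection rule is not reproduced here.

```python
# Per-ticker Pearson correlation between daily article volume and daily
# return. The schema is hypothetical: one row per (date, ticker) with
# 'articles' (post count) and 'close' (closing price).
import pandas as pd

def volume_return_correlation(df: pd.DataFrame) -> pd.Series:
    df = df.sort_values("date").copy()
    df["ret"] = df.groupby("ticker")["close"].pct_change()
    # correlation of article volume with same-day return, per stock
    return df.groupby("ticker").apply(lambda g: g["articles"].corr(g["ret"]))
```

A portfolio rule in the spirit of the paper might then rank the 57 KOSPI200 target stocks by this statistic and pick the top names.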

The Effects of Discrepancy in Reconstruction Algorithm between Patient Data and Normal Database in AutoQuant Evaluation: Focusing on Half-Time Scan Algorithm in Myocardial SPECT (심근 관류 스펙트에서 Half-Time Scan과 새로운 재구성법이 적용된 정상군 데이터를 기반으로 한 정량적 분석 결과의 차이 비교)

  • Lee, Hyung-Jin;Do, Yong-Ho;Cho, Seong-Wook;Kim, Jin-Eui
    • The Korean Journal of Nuclear Medicine Technology / v.18 no.1 / pp.122-126 / 2014
  • Purpose: New reconstruction algorithms (NRA) provided by vendors aim to shorten the acquisition scan time. However, depending on the installed version, the AutoQuant program used for quantitative analysis of myocardial SPECT may not contain a normal database to which an NRA has been applied. The purpose of this paper is therefore to compare quantitative results across AutoQuant versions for myocardial SPECT acquired with an NRA and a half-time (HT) scan. Materials and Methods: Rest Tl and stress MIBI data from 80 patients (40 men, 40 women) were gathered. The data were acquired with the HT protocol and reconstructed with ASTONISH (Philips), an NRA. The modified AutoQuant of SNUH and the old, vendor-supplied version of AutoQuant (with a full-time-scan normal database) were compared. The comparison groups were coronary artery disease (CAD), 24-hour delayed imaging, and nearly normal patients presenting with simple pain. The perfusion distribution pattern, summed stress score (SSS), summed rest score (SRS), extent, and total perfusion deficit (TPD) of 25 patients in each group were compared and evaluated. Results: In the CAD group, the re-edited AutoQuant (HT) reduced SSS and SRS by about 30% (P<0.0001), extent by about 38%, and TPD by about 30% (P<0.0001). In the perfusion scores, the infero-medium, infero-apical, lateral-medium, and lateral-apical regions changed the most. In the 24-hour delay group, SRS (P=0.042), extent (P=0.018), and TPD (P=0.0024) were reduced by about 13-18%, and in the simple pain group the four measures were reduced by about 5-7%. Conclusion: This study was motivated by the expectation that the results could be affected by the normal database, which may vary with race and gender. We showed that the combination of a scan-time-reducing reconstruction algorithm and an analysis program whose normal database does not match the scan protocol also affects the results. The clinical usefulness of gated myocardial SPECT can be increased if each hospital collects normal data appropriate to its own acquisition protocol.
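The core comparison here is a paired one: the same 25 patients scored against two normal databases. A minimal sketch with simulated scores (not patient data), assuming a paired t-test as the significance test:

```python
# Paired comparison of summed stress scores (SSS) from the old full-time
# normal database vs. the re-edited half-time database, same 25 patients.
# Scores are simulated; a paired t-test is an assumed choice of test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sss_old = rng.integers(8, 20, size=25).astype(float)     # old AutoQuant SSS
sss_ht = sss_old * 0.7 + rng.normal(0.0, 1.0, size=25)   # ~30% lower, as reported

t, p = stats.ttest_rel(sss_old, sss_ht)
reduction = 100.0 * (sss_old.mean() - sss_ht.mean()) / sss_old.mean()
print(f"mean SSS reduction: {reduction:.1f}%, paired t-test p = {p:.2g}")
```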


Study on the Development of Measuring System for Fermentation Degree of Liquid Swine Manure Using Visible Ray (가시광선을 이용한 돈분뇨 액비 부숙도 측정장치 개발에 관한 연구)

  • Choi, D.Y.;Kwag, J.H.;Park, K.H.;Song, J.I.;Kim, J.H.;Kang, H.S.;Han, C.B.;Choi, S.W.;Lee, C.S.
    • Journal of Animal Environmental Science / v.16 no.3 / pp.227-236 / 2010
  • This study was conducted to develop a measuring system and method for the fermentation degree of liquid swine manure using visible light. The changes in the constituents of liquid swine manure were examined: pH gradually increased with time, while EC gradually decreased. Malodor strength decreased gradually over time under aeration, and the control needed more time than the aeration treatment to reach the same reduction. In the aeration treatment, there was no germination of seeds (radish, Chinese cabbage) up to the 6th week, and the germination rate at the 15th week was over 50%; in the control, there was no germination up to the end of the experiment. The circular chromatography method showed a change after the 10th week in the aeration treatment but no change in the control up to the end of the experiment. Consequently, the fermentation degree of liquid swine manure is related to pH, EC, germination rate, malodor concentration, and the circular chromatography reaction. The simple analytical instrument for liquid swine manure consists of tungsten-halogen and deuterium lamps as the light source, a sample holder, a quartz cell, a spectrometer as the spectrum analyzer, a malodor measuring device, and software. The results showed that the developed instrument can approximately predict the fermentation degree of liquid swine manure from visible light, and the experiment proved that it is reliable, feasible, and practical for this purpose.
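The optical core of the instrument described above reduces to comparing sample and reference intensity spectra. A minimal sketch of that transmittance/absorbance step (Beer-Lambert relation), with hypothetical spectra; the paper's calibration from spectrum to fermentation degree is not reproduced here.

```python
# Transmittance and absorbance from sample vs. reference spectra.
# Wavelength grid and intensities are hypothetical toy values.
import numpy as np

wavelengths = np.arange(400, 701, 5)                            # visible range, nm
i_reference = np.full(wavelengths.shape, 1000.0)                # lamp baseline
i_sample = i_reference * np.exp(-0.002 * (700 - wavelengths))   # toy sample spectrum

transmittance = i_sample / i_reference                # T = I / I0
absorbance = -np.log10(transmittance)                 # A = -log10(T)
print(f"T at 450 nm: {transmittance[wavelengths == 450][0]:.3f}")
```

A calibration (e.g., regression of absorbance at selected wavelengths against pH, EC, or germination rate) would then map the spectrum to a fermentation-degree estimate.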

Analysis of Light Transmittance according to the Array Structure of Collagen Fibers Constituting the Corneal Stroma (각막실질 콜라겐섬유의 배열구조에 따른 광투과율 분석)

  • Lee, Myoung-Hee;Kim, Young-Chul
    • The Korean Journal of Vision Science / v.20 no.4 / pp.561-568 / 2018
  • Purpose: The size and regular arrangement of the collagen fibers in the corneal stroma are closely correlated with its transparency. Simulations were carried out to investigate how light transmittance changes with the array structure and the thickness of the collagen fiber layer. Methods: The collagen fibers of the corneal stroma were arranged in regular hexagonal, hexagonal, square, and random patterns with the OptiFDTD simulation software, and the light transmittance was analyzed. For the square array, transmittance was examined as the density changed while the number of collagen fibers in the simulation space was held constant, and again when both the number and the density of the fibers were varied. Results: With the number of collagen fibers fixed, the density decreases and the fiber layer becomes thicker in the order square, regular hexagonal, random, and hexagonal. When the array structure was varied, the transmittance measured at a detector in the same position was almost the same regardless of the structure. At detectors D0, D1, D2, and D3, the maximum transmittance occurred for the square; hexagonal and square; regular hexagonal; and regular hexagonal structures, respectively, and the minimum for the hexagonal; random; hexagonal and square; and square structures. However, the maximum and minimum transmittance differed by less than 1%. With the number of collagen fibers fixed, the transmittance of the square array decreased as the fiber layer thickened, and the decrease was larger when the number of collagen fibers was reduced. Conclusion: Even when the collagen array structure changed, the light transmittance remained almost the same regardless of the arrangement. However, changing the array structure changed the thickness of the collagen fiber layer, and the transmittance decreased as the thickness increased. In other words, the transparency of the corneal stroma is related more closely to the thickness of the fiber layer than to the arrangement of the collagen fibers.
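The thickness argument in this abstract rests on simple lattice geometry: at a fixed fiber count, a lower in-plane packing density forces a thicker layer. A minimal sketch of the area-fraction formulas for square and hexagonal lattices, with hypothetical fiber radius and spacing (stromal collagen fibrils are roughly 30 nm across):

```python
# Circle area fractions on square and regular hexagonal lattices.
# Fiber radius and center spacing below are hypothetical values.
import math

def square_area_fraction(r_nm: float, spacing_nm: float) -> float:
    """Area fraction of circles of radius r on a square lattice."""
    return math.pi * r_nm ** 2 / spacing_nm ** 2

def hexagonal_area_fraction(r_nm: float, spacing_nm: float) -> float:
    """Area fraction on a regular hexagonal (triangular) lattice."""
    return 2.0 * math.pi * r_nm ** 2 / (math.sqrt(3.0) * spacing_nm ** 2)

r, a = 16.0, 60.0   # nm
print(f"square:    {square_area_fraction(r, a):.3f}")
print(f"hexagonal: {hexagonal_area_fraction(r, a):.3f}")
# At equal fiber count, layer thickness scales as 1/density, which is
# the thickness ordering effect the abstract reports.
```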

Evaluation of Error Factors in Quantitative Analysis of Lymphoscintigraphy (Lymphoscintigraphy의 정량분석 시 오류 요인에 관한 평가)

  • Yeon, Joon-Ho;Kim, Soo-Yung;Choi, Sung-Ook;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.2 / pp.76-82 / 2011
  • Purpose: Lymphoscintigraphy is the standard examination for diagnosing lymphatic disorders and evaluating them after treatment, and it is useful for planning lymphedema therapy. In lower-extremity lymphedema studies, the results are affected if the patient does not hold the same position in the images taken 1 minute, 1 hour, and 2 hours after injection. We therefore studied methods to improve reliability by minimizing the quantitative-analysis errors caused by such factors. Materials and Methods: Using a GE Infinia camera, we injected $^{99m}Tc$-phytate 37 MBq (1.0 mCi), four syringes, hypodermically into the feet of 40 subjects at Samsung Medical Center from June to August 2010. After acquiring images with the feet fixed and unfixed, we measured the change in counts caused by soft-tissue and bone attenuation at different foot positions. We also measured counts five times while increasing the distance between a $^{99m}Tc$ point source and the detector in 2 cm steps, to check the count difference attributable to the distance change alone. Finally, we compared 1-minute and 6-minute images acquired in the same position to check how movement of $^{99m}Tc$-phytate along the lymphatic duct affects the quantitative results. Results: Comparing fixed and unfixed foot positions at 1 minute after injection, the percentage difference ranged from 2.7% to 25.8%. With the distance increased in 2 cm steps from a baseline of 176,587 counts (mean), the counts were 173,661 (2 cm), 172,095 (4 cm), 170,996 (6 cm), 167,677 (8 cm), and 169,208 (10 cm); the percentage differences did not exceed 2.5% (1.27, 1.79, 2.04, 2.42, and 2.35%). Movement along the lymphatic duct within the 6 minutes between injection and scanning produced differences of 0.15% to 2.3%. Hence errors of over 20% arise from soft-tissue and bone attenuation alone, far exceeding the contributions of distance (2.42%) and lymphatic transport (2.3%). Conclusion: When patients assume different foot positions at 1 minute, 1 hour, and 2 hours after injection in lymphoscintigraphy, which evaluates lymphatic flow in lymphedema patients and quantifies lymphatic uptake, soft-tissue and bone attenuation can produce errors of up to 25.8%, and PASW (Predictive Analytics Software) confirmed that the fixed and unfixed positions differed significantly. The detector-to-foot distance and the elapsed time after injection also partially influence the quantitative results. Foot position should therefore be fixed, making full use of a fixing board, in quantitative lymphoscintigraphy.
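A note on the percentage-difference metric quoted throughout this abstract: the paper's exact convention is not stated, so the sketch below assumes the relative difference against the fixed-position baseline count, with an illustrative value.

```python
# Relative difference against a baseline count -- one plausible reading
# of the paper's "percentage difference"; the convention is an assumption.
def percent_difference(baseline: float, measured: float) -> float:
    return abs(baseline - measured) / baseline * 100.0

print(f"{percent_difference(176_587, 172_000):.2f} %")   # ~2.60 %
```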


The Precision Test Based on States of Bone Mineral Density (골밀도 상태에 따른 검사자의 재현성 평가)

  • Yoo, Jae-Sook;Kim, Eun-Hye;Kim, Ho-Seong;Shin, Sang-Ki;Cho, Si-Man
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.67-72 / 2009
  • Purpose: The ISCD (International Society for Clinical Densitometry) requires users to perform a precision test to assure quality, but gives no recommendation on how to select patients for the test. We therefore investigated how bone density status affects the precision test by measuring reproducibility in three bone density groups (normal, osteopenia, osteoporosis). Materials and Methods: Four users performed precision tests on 420 BMD patients (age $57.8{\pm}9.02$) at Asan Medical Center (January-June 2008). In the first group (A), each of the four users selected 30 patients regardless of bone density status and measured two sites (L-spine, femur) twice. In the second group (B), each user measured 10 patients per category in the same manner, dividing patients into the three stages (normal, osteopenia, osteoporosis). In the third group (C), two users each measured 30 patients in the same manner while accounting for bone density status. We used a GE Lunar Prodigy Advance (Encore V11.4) and analyzed the results by comparing %CV with the LSC using the precision tool from the ISCD; the analysis was verified with SPSS. Results: In group A, the %CV values of the four users (a, b, c, d) were 1.16, 1.01, 1.19, and 0.65 g/$cm^2$ for the L-spine and 0.69, 0.58, 0.97, and 0.47 g/$cm^2$ for the femur. In group B, they were 1.01, 1.19, 0.83, and 1.37 g/$cm^2$ for the L-spine and 1.03, 0.54, 0.69, and 0.58 g/$cm^2$ for the femur; comparing groups A and B, we found no considerable difference. In group C, user 1's %CV values for normal, osteopenia, and osteoporosis were 1.26, 0.94, and 0.94 g/$cm^2$ for the L-spine and 0.94, 0.79, and 1.01 g/$cm^2$ for the femur, and user 2's were 0.97, 0.83, and 0.72 g/$cm^2$ for the L-spine and 0.65, 0.65, and 1.05 g/$cm^2$ for the femur. Reproducibility hardly differed across bone density groups, although differences between the two users' individual values affected the overall reproducibility. Conclusions: The precision test is an important factor in bone density follow-up: the better the machine and user reproducibility, the more clinically useful the measurement, because the range of deviation is small. Users should check the machine's reproducibility before testing and keep a consistent technique during BMD examinations. In precision testing, differences in measured values usually arise from ROI changes caused by patient positioning. In osteoporosis patients it is harder to draw an accurate initial ROI than in normal or osteopenic patients because bone recognition is poor, even though the software draws the ROI automatically; since the ROI Copy function is used at follow-up, a consistent initial ROI is essential. In this study, precision tests performed with bone density status taken into account kept the LSC within 3%, with no considerable difference between groups; patients for the precision test can therefore be selected regardless of bone density status.
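For context on the %CV and LSC figures above: the ISCD precision procedure pools per-patient standard deviations as a root mean square and sets the least significant change at 2.77 times the precision error (95% confidence). A minimal sketch with simulated BMD values, not study data:

```python
# ISCD-style precision: per-patient SD across repeat scans, pooled as a
# root mean square, with LSC = 2.77 x precision error (95% confidence).
# BMD values below are simulated, not data from the study.
import numpy as np

rng = np.random.default_rng(2)
true_bmd = rng.normal(1.0, 0.12, size=30)                    # 30 patients
scans = true_bmd[:, None] + rng.normal(0.0, 0.01, (30, 2))   # 2 scans each, g/cm2

sd = scans.std(axis=1, ddof=1)          # per-patient SD
cv = sd / scans.mean(axis=1) * 100      # per-patient %CV
rms_sd = np.sqrt(np.mean(sd ** 2))      # precision error, g/cm2
rms_cv = np.sqrt(np.mean(cv ** 2))      # precision error, %CV

print(f"precision: {rms_sd:.4f} g/cm2 ({rms_cv:.2f} %CV)")
print(f"LSC: {2.77 * rms_sd:.4f} g/cm2 ({2.77 * rms_cv:.2f} %)")
```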


Impact of Shortly Acquired IPO Firms on ICT Industry Concentration (ICT 산업분야 신생기업의 IPO 이후 인수합병과 산업 집중도에 관한 연구)

  • Chang, YoungBong;Kwon, YoungOk
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.51-69 / 2020
  • It is by now a stylized fact that a small number of technology firms such as Apple, Alphabet, Microsoft, Amazon, and Facebook have become the larger, dominant players in their industries. Coupled with the rise of these leading firms, a large number of young firms have become acquisition targets in their early post-IPO stages. This has produced a sharp decline in the number of new entries on public exchanges, even though a series of policy reforms has been promulgated to foster competition by encouraging new entries. Given this industry trend, a number of studies have reported increased concentration in most developed countries, but less is understood about what caused it. In this paper, we uncover the mechanisms by which industries have become concentrated over the last decades by tracing the changes in industry concentration associated with a firm's status change in its early post-IPO stages, with emphasis on firms acquired shortly after going public. With the transition to digital-based economies in particular, it is imperative for incumbent firms to adapt to and keep pace with new ICT and related intelligent systems. For instance, after acquiring a young firm equipped with AI-based solutions, an incumbent may respond better to changes in customer taste and preference by integrating the acquired AI solutions and analytics skills into multiple business processes. It is therefore not unusual for young ICT firms to become attractive acquisition targets. To examine the role of M&As involving young firms in reshaping industry concentration, we identify a firm's status in its early post-IPO stages over sample periods spanning 1990 to 2016 as: i) delisted, ii) standalone, or iii) acquired. According to our analysis, firms that have gone public since the 2000s have been acquired by incumbents more quickly than those of previous generations, and the acquisition rate is greater for IPO firms in the ICT sector than for their counterparts in other sectors. Our results from multinomial logit models suggest that a large number of IPO firms have been acquired early in their post-IPO lives despite their financial soundness: IPO firms are more likely to be acquired, rather than delisted due to financial distress, when they are more profitable, more mature, or less leveraged, and IPO firms with venture capital backing have also become acquisition targets more frequently. As more firms are acquired shortly after their IPO, our results show increased concentration. While providing limited evidence on the role of large incumbents in explaining the change in industry concentration overall, our results show that the large-firm effect on concentration is pronounced in the ICT sector, possibly capturing the current trend of a few tech giants such as Alphabet, Apple, and Facebook continuing to increase their market share. In addition, compared with acquisitions of non-ICT firms, the concentration impact of early-stage IPO firms becomes larger when ICT firms are acquired. Our study makes new contributions: to the best of our knowledge, it is one of only a few studies linking a firm's post-IPO status to the associated changes in industry concentration. Although some studies have addressed concentration issues, their primary focus was market power or proprietary software. In contrast to earlier studies, we are able to uncover the mechanism by which industries have become concentrated by placing emphasis on M&As involving young IPO firms. Interestingly, the concentration impact of IPO firm acquisitions is magnified when a large incumbent is the acquirer. This leads us to infer the underlying reasons why industries have become more concentrated in favor of large firms in recent decades. Overall, our study sheds new light on the literature by providing a plausible explanation for why industries have become concentrated.
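Industry concentration in studies like this one is typically tracked with a summary index; whether this paper uses the Herfindahl-Hirschman Index (HHI) or another measure is not stated, but a minimal HHI sketch with hypothetical revenue figures illustrates why an acquisition mechanically raises concentration.

```python
# Herfindahl-Hirschman Index: sum of squared market shares (0 < HHI <= 1).
# Revenue figures are hypothetical illustrations.
def hhi(revenues):
    total = sum(revenues)
    return sum((r / total) ** 2 for r in revenues)

before = [30, 25, 20, 15, 10]     # five independent firms
after = [30 + 10, 25, 20, 15]     # the largest incumbent acquires the smallest

print(f"HHI before acquisition: {hhi(before):.3f}")   # 0.225
print(f"HHI after acquisition:  {hhi(after):.3f}")    # 0.285 -- concentration rises
```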

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, focusing mainly on speed-ups, from 2G to 5G, to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To deliver those services, low latency and high reliability are as critical as high data rates for real-time applications; 5G accordingly targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of $10^6$ devices/$km^2$. In particular, for intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, reducing delay and assuring reliability for real-time services matter as much as high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves achieve high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting their use indoors. It is difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signaling from data-plane packets, must keep a delay-related tree structure available in the event of an emergency during autonomous driving; in such scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since SDNs with the usual centralized structure have difficulty meeting the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs thus need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely tied to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor, since it is fast enough and contributes less than 1 ms, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; delay plays a very sensitive role in such cases. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze through simulation, according to the information flow, the cell layer from which the vehicle should request the relevant information. For the simulation, since the 5G data rate is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50-250 m and maximum vehicle speeds of 30-200 km/h in order to examine the network architecture that minimizes the delay.
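A back-of-envelope check on these simulation parameters: the time a vehicle spends inside one small cell bounds how often cell-level SDN state must be refreshed. A minimal sketch using the cell radii and speeds quoted above:

```python
# Dwell time of a vehicle in one 5G small cell, assuming a straight path
# through the cell center (the most generous case).
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    return 2.0 * cell_radius_m / (speed_kmh / 3.6)

for radius in (50, 250):        # cell radii from the paper, m
    for speed in (30, 200):     # vehicle speeds from the paper, km/h
        print(f"r={radius:3d} m, v={speed:3d} km/h -> "
              f"{dwell_time_s(radius, speed):6.2f} s in cell")
# Worst case (r=50 m, v=200 km/h) gives only ~1.8 s per cell, which is
# why the SDN's information change cycle and processing time dominate.
```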