• Title/Summary/Keyword: Information Processing Technology (정보처리기술)

Search Results: 13,602

The Demand Analysis of Water Purification of Groundwater for the Horticultural Water Supply (시설원예 용수 공급을 위한 지하수 정수 요구도 분석)

  • Lee, Taeseok;Son, Jinkwan;Jin, Yujeong;Lee, Donggwan;Jang, Jaekyung;Paek, Yee;Lim, Ryugap
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.12 / pp.510-523 / 2020
  • This study analyzed groundwater quality in hydroponic cultivation facilities and evaluated the suitability of groundwater for hydroponic use. On average, good-quality groundwater measured pH 6.61, EC 0.27 dS/m, NO3-N 7.64 mg/L, NH4+-N 0.80 mg/L, PO4-P 0.09 mg/L, K+ 6.26 mg/L, Ca2+ 18.57 mg/L, Mg2+ 4.38 mg/L, and Na+ 20.85 mg/L, all of which satisfy the water quality standard for raw water in nutrient cultivation. In the case of farms experiencing groundwater quality problems, however, most items exceeded the standard. The analysis indicated that purifying groundwater of unsuitable quality and using it as raw water is effective in terms of water quality and soil purification. If an agricultural water purification system is designed based on these results, the items to be treated can be estimated in advance, which simplifies the design. With such a system in place, groundwater can be used directly in the facility and for horticulture. These results will be useful for sustainable agriculture and the environment.
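
The screening step the abstract describes, comparing measured items against a raw-water standard to estimate which items a purification system must treat, can be sketched as below. The limit values are placeholders for illustration only, not the standards used in the study; the "good" sample reuses the averages quoted above, and the "problem" sample is hypothetical.

```python
# A minimal sketch of estimating which water-quality items need treatment.
# The limit values below are placeholders, NOT the study's raw-water standards.

RAW_WATER_LIMITS = {"EC": 1.5, "NO3-N": 10.0, "NH4-N": 1.0, "Na": 30.0}  # hypothetical

def items_to_treat(sample):
    """Return the measured items that exceed their (assumed) limit."""
    return [item for item, value in sample.items()
            if item in RAW_WATER_LIMITS and value > RAW_WATER_LIMITS[item]]

good_well = {"EC": 0.27, "NO3-N": 7.64, "NH4-N": 0.80, "Na": 20.85}    # averages above
problem_well = {"EC": 2.10, "NO3-N": 15.3, "NH4-N": 0.40, "Na": 55.0}  # hypothetical
```

Under these assumed limits, the good well needs no treatment, while the problem well flags EC, NO3-N, and Na, which is the kind of item list the abstract says a purification-system design could be based on.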

A Study on Case for Localization of Korean Enterprises in India (인도 진출 한국기업의 현지화에 관한 사례 연구)

  • Seo, Min-Kyo;Kim, Hee-Jun
    • International Commerce and Information Review / v.16 no.4 / pp.409-437 / 2014
  • The purpose of this study is to present specific ways to achieve successful localization by analyzing success and failure cases within the framework of strategic models of localization, built on a theoretical background. The strategic models are divided by management aspect: localization of product and sourcing, localization of human resources, localization of marketing, localization of R&D, harmony with the local community, and delegation of authority between headquarters and local subsidiaries. Comparing the success and failure cases of individual companies operating in India shows that, in terms of product and sourcing, successful companies procure components locally and produce models that local consumers prefer, while failed companies cannot meet local consumers' needs. In the localization of human resources, most companies recognize its importance and actively develop superior human resources through training. In the localization of marketing, successful companies perform pre-market research and management, build effective marketing and after-service networks, select local business partners with technical skills, and carry out business activities, customer support, and complaint handling through their own organizations. In terms of R&D, successful major companies establish and operate local R&D centers to develop models suited to local customers. Regarding harmony with the local community, companies that achieved successful localization understand the cultural environment and contribute to the community through CSR. In the delegation of authority between headquarters and local subsidiaries, most Korean companies are weak: decisions tend to be made by the head office rather than the local subsidiary. The implication of this thesis is that Korean enterprises in India should pursue localization of products and components; foster local human resources who understand the company's management and systems and can take part in market strategy decisions; consider wholly owned subsidiaries; establish and operate R&D centers; understand the local culture and system; fulfill corporate social responsibility; and grant autonomy in management.

A Study on Brand Identity of TV Programs in the Digital Culture - Focusing on the comparative research of current issue programs, and development - (디지털 문화에서 TV 방송의 브랜드 아이덴티티 연구 -시사 교양 프로그램의 사례비교 및 개발을 중심으로-)

  • Jeong, Bong-Keum;Chang, Dong-Ryun
    • Archives of design research / v.18 no.4 s.62 / pp.53-64 / 2005
  • The emergence of digital media, a new means of communication, is something of a wonder as well as a source of cultural tension. Industrial technologies that dramatically expand human abilities are developing much faster than humans can adapt to them. Without exception, digital media creates new cultural content and forms, shaking the very foundation of our notions about human beings. The Korean broadcasting environment has entered the era of multi-media and multi-channel as digital technology divided the media into network, cable, satellite, and internet. In this digital culture, broadcasting, as a medium of information delivery and communication, has greater influence than ever. These changes in the broadcasting environment have turned TV viewers into new consumers who participate in and drive active communication by choosing and using media. As consumers with the power to select channels move to the center of broadcasting, this study attempts to systematize the question of broadcasting's core identity through brand. Story schema theory can be applied as a cognitive-psychological tool to explain the information-processing of these active consumers; designing with stories emerges as a case of brand storytelling. The study covers current affairs and educational programs on network TV between May and August 2005, comparing Korean and foreign programs by broadcasting station. It concludes that channel identity must be taken into consideration in the brand strategy of each program. In particular, a station's flagship programs must not be treated as separate programs unrelated to the station's identity; they must include the content and form that build the identity of the channel. The study also reconfirmed that the anchorperson's brand can be an important factor in the program's brand identity.

A Variable Latency Goldschmidt's Floating Point Number Square Root Computation (가변 시간 골드스미트 부동소수점 제곱근 계산기)

  • Kim, Sung-Gi;Song, Hong-Bok;Cho, Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.1 / pp.188-198 / 2005
  • The Goldschmidt iterative algorithm finds a floating-point square root by performing a fixed number of multiplications. In this paper, a variable-latency Goldschmidt square root algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To find the square root of a floating-point number $F$, the algorithm repeats the operations $R_i=\frac{3-e_r-X_i}{2}$, $X_{i+1}=X_i{\times}R_i^2$, $Y_{i+1}=Y_i{\times}R_i$, $i\in\{0,1,2,\ldots,n-1\}$, with the initial values $X_0=Y_0=T^2{\times}F$, where $T=\frac{1}{\sqrt{F}}+e_t$. The bits to the right of $p$ fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 28 for single-precision and 58 for double-precision floating point. Let $X_i=1{\pm}e_i$; then $X_{i+1}=1-e_{i+1}$, where $e_{i+1}<\frac{3e_i^2}{4}{\mp}\frac{e_i^3}{4}+4e_r$. If $|X_i-1|<2^{\frac{-p+2}{2}}$, then $e_{i+1}<8e_r$, which is less than the smallest number representable in floating point, so $\sqrt{F}$ is approximated by $\frac{Y_{i+1}}{T}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal square root tables ($T=\frac{1}{\sqrt{F}}+e_t$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a square root unit, and to construct optimized approximate reciprocal square root tables. The results of this paper can be applied to many areas that use floating-point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
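
The recurrence above can be sketched in software as follows. This is a floating-point illustration, not the paper's hardware design: the seed $T$ is computed directly with an assumed seed error $e_t$, whereas the paper reads $T$ from an approximate reciprocal-square-root table.

```python
# Floating-point sketch of the variable-latency Goldschmidt square root.
# The seed T is computed directly here; the paper uses a lookup table.

def goldschmidt_sqrt(f, p=28, max_iter=10):
    """Approximate sqrt(f), stopping as soon as X_i is close enough to 1."""
    e_r = 2.0 ** (-p)                        # truncation error bound, e_r = 2^-p
    t = (1.0 / f ** 0.5) * (1 + 2.0 ** -8)   # seed T = 1/sqrt(F) + e_t (e_t assumed)
    x = y = t * t * f                        # X_0 = Y_0 = T^2 * F
    for _ in range(max_iter):
        r = (3.0 - e_r - x) / 2.0            # R_i = (3 - e_r - X_i) / 2
        x = x * r * r                        # X_{i+1} = X_i * R_i^2  -> 1
        y = y * r                            # Y_{i+1} = Y_i * R_i    -> T * sqrt(F)
        if abs(x - 1.0) < 2.0 ** ((-p + 2) / 2):
            break                            # variable-latency stopping test
    return y / t                             # sqrt(F) ~ Y_n / T
```

Because $X_i = Y_i^2 / (T^2 F)$ throughout, driving $X_i$ to 1 drives $Y_i / T$ to $\sqrt{F}$; the loop exits after however many iterations the input actually needs, which is the variable-latency idea.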

A Variable Latency Goldschmidt's Floating Point Number Divider (가변 시간 골드스미트 부동소수점 나눗셈기)

  • Kim, Sung-Gi;Song, Hong-Bok;Cho, Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.2 / pp.380-389 / 2005
  • The Goldschmidt iterative algorithm performs a floating-point divide with a fixed number of multiplications. In this paper, a variable-latency Goldschmidt divide algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To calculate a floating-point divide $\frac{N}{F}$, multiply $T=\frac{1}{F}+e_t$ into both the numerator and the denominator, giving $\frac{TN}{TF}=\frac{N_0}{F_0}$. The algorithm then repeats the operations $R_i=(2-e_r-F_i)$, $N_{i+1}=N_i{\times}R_i$, $F_{i+1}=F_i{\times}R_i$, $i\in\{0,1,\ldots,n-1\}$. The bits to the right of $p$ fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 29 for single-precision and 59 for double-precision floating point. Let $F_i=1+e_i$; then $F_{i+1}=1-e_{i+1}$. If $|F_i-1|<2^{\frac{-p+3}{2}}$, then $e_{i+1}<16e_r$, which is less than the smallest number representable in floating point, so $N_{i+1}$ approximates $\frac{N}{F}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal tables ($T=\frac{1}{F}+e_t$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a divider, and to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that use floating-point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
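
The divide recurrence can be sketched the same way as the square-root one. Again this is a floating-point illustration with an assumed seed error $e_t$, not the paper's table-driven hardware.

```python
# Floating-point sketch of the variable-latency Goldschmidt divider.
# T approximates 1/F; here it is computed directly instead of read
# from a reciprocal table as in the paper.

def goldschmidt_div(n, f, p=29, max_iter=10):
    """Approximate n / f, stopping as soon as F_i is close enough to 1."""
    e_r = 2.0 ** (-p)                    # truncation error bound, e_r = 2^-p
    t = (1.0 / f) * (1 + 2.0 ** -8)      # seed T = 1/F + e_t (e_t assumed)
    num, den = n * t, f * t              # N_0 = T*N, F_0 = T*F ~ 1
    for _ in range(max_iter):
        r = 2.0 - e_r - den              # R_i = 2 - e_r - F_i
        num *= r                         # N_{i+1} = N_i * R_i -> N/F
        den *= r                         # F_{i+1} = F_i * R_i -> 1
        if abs(den - 1.0) < 2.0 ** ((-p + 3) / 2):
            break                        # variable-latency stopping test
    return num
```

Since the ratio num/den stays equal to $N/F$ (up to truncation), driving the denominator to 1 leaves the numerator holding the quotient; the iteration count adapts to how good the table seed was for the given input.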

KoFlux's Progress: Background, Status and Direction (KoFlux 역정: 배경, 현황 및 향방)

  • Kwon, Hyo-Jung;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology / v.12 no.4 / pp.241-263 / 2010
  • KoFlux is a Korean network of micrometeorological tower sites that use eddy-covariance methods to monitor the cycles of energy, water, and carbon dioxide between the atmosphere and key terrestrial ecosystems in Korea. KoFlux embraces the mission of AsiaFlux, i.e., to bring Asia's key ecosystems under observation to ensure the quality and sustainability of life on earth. The main purposes of KoFlux are to provide (1) an infrastructure to monitor, compile, archive, and distribute data for the science community and (2) a forum and short courses for the application and distribution of knowledge and data among scientists, including practitioners. The KoFlux community pursues the vision of AsiaFlux, i.e., "thinking community, learning frontiers," by creating information and knowledge of ecosystem science on carbon, water, and energy exchanges in key terrestrial ecosystems in Asia, by promoting multidisciplinary cooperation and the integration of scientific research and practice, and by providing local communities with sustainable ecosystem services. Currently, KoFlux has seven sites in key terrestrial ecosystems: five in Korea and two in the Arctic and Antarctic. KoFlux has systemized standardized data processing based on scrutiny of the data observed from these ecosystems and has synthesized the processed data into an open-access database. Through regular publications, workshops, and training courses, KoFlux has provided an agora for building networks, exchanging information among flux measurement and modelling experts, and educating scientists in flux measurement and data analysis. Despite these persistent initiatives, collaborative networking is still limited within the KoFlux community. To break down the walls between disciplines and strengthen partnership and ownership of the network, KoFlux will be housed in the National Center for Agro-Meteorology (NCAM) at Seoul National University in 2011 and will provide several of NCAM's core services. These concerted efforts will facilitate the augmentation of the current monitoring network, the education of next-generation scientists, and the provision of sustainable ecosystem services to society.

Evaluation of Preference by Bukhansan Dulegil Course Using Sentiment Analysis of Blog Data (블로그 데이터 감성분석을 통한 북한산둘레길 구간별 선호도 평가)

  • Lee, Sung-Hee;Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture / v.49 no.3 / pp.1-10 / 2021
  • This study aimed to evaluate preferences for the Bukhansan dulegil using sentiment analysis, a natural language processing technique, and to derive preferred and non-preferred factors. We collected blog posts written in 2019 and produced sentiment scores for the 21 dulegil courses by identifying positive and negative words in the texts. Content analysis was then conducted to determine which factors led visitors to prefer or dislike each course. In blogs about the Bukhansan dulegil, positive words appeared in approximately 73% of the content, and for every course the percentage of positive documents was significantly higher than that of negative documents, indicating that visitors generally felt positive about the trail. Nevertheless, the sentiment scores divided the 21 courses into preferred and non-preferred groups. Visitors preferred less difficult courses that could be walked without strain and in which various landscape elements (visual, auditory, olfactory, etc.) were harmonious yet distinct. They also preferred courses with varied landscapes and landscape sequences, appreciated viewpoints such as observation decks as a significant factor, and favored courses with good accessibility and information provision, such as information boards. Conversely, dissatisfaction stemmed from noise from adjacent roads, excessively urbanized sections, and the unevenness or difficulty of a course, which was primarily attributed to insufficient information on its landscape or sections. The results can guide not only national parks but also the management of nearby forest green areas in planning the repair and improvement of dulegil. Furthermore, the sentiment analysis used here is meaningful in that it can continuously monitor actual users' responses to natural areas. However, since the evaluation relies on a predefined sentiment dictionary, continuous updates are needed; and since social media tends to share positive rather than negative content, the results should be compared with and reviewed against methods such as on-site surveys.
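
The dictionary-based scoring the abstract describes can be sketched as below. The English word lists and sample posts are illustrative stand-ins, not the Korean sentiment dictionary or blog corpus the authors actually used.

```python
# Minimal dictionary-based sentiment scoring, aggregated per trail course.
# Word lists and posts are toy examples, not the study's lexicon or data.

POSITIVE = {"scenic", "peaceful", "beautiful", "easy", "accessible"}
NEGATIVE = {"noisy", "crowded", "difficult", "confusing"}

def sentiment_score(text):
    """(#positive - #negative) / #matched words; 0.0 if nothing matched."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return (pos - neg) / matched if matched else 0.0

# Aggregate a per-course score from the posts mentioning each course.
posts_by_course = {
    "course-1": ["beautiful peaceful forest walk", "easy and scenic trail"],
    "course-2": ["noisy road nearby and confusing signs", "scenic but crowded"],
}
scores = {course: sum(map(sentiment_score, posts)) / len(posts)
          for course, posts in posts_by_course.items()}
```

Ranking courses by such aggregate scores is what lets the study separate preferred from non-preferred courses even when most individual posts lean positive.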

Nonlinear Vector Alignment Methodology for Mapping Domain-Specific Terminology into General Space (전문어의 범용 공간 매핑을 위한 비선형 벡터 정렬 방법론)

  • Kim, Junwoo;Yoon, Byungho;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.127-146 / 2022
  • Recently, as word embedding has shown excellent performance in various deep-learning-based natural language processing tasks, research on the advancement and application of word, sentence, and document embedding has been actively conducted. Among these topics, cross-language transfer, which enables semantic exchange between different languages, is growing alongside the development of embedding models. Academic interest in vector alignment is growing with the expectation that it can be applied to various embedding-based analyses. In particular, vector alignment is expected to support mapping between specialized and general domains: the vocabulary of specialized fields such as R&D, medicine, and law could be mapped into the space of a pre-trained language model learned from a huge volume of general-purpose documents, or alignment could provide a clue for mapping vocabulary between different specialized fields. However, the linear vector alignment mainly studied in academia assumes statistical linearity and thus tends to oversimplify the vector space. It essentially assumes that the two vector spaces are geometrically similar, which inevitably causes distortion in the alignment process. To overcome this limitation, we propose a deep-learning-based vector alignment methodology that effectively learns the nonlinearity of the data. The proposed methodology sequentially trains a skip-connected autoencoder and a regression model to align the specialized word embeddings, expressed in their own space, with the general embedding space; through inference with the two trained models, the specialized vocabulary can be aligned in the general space. To verify the performance of the proposed methodology, an experiment was performed on a total of 77,578 documents in the field of health care among national R&D tasks performed from 2011 to 2020. The results confirmed that the proposed methodology outperforms existing linear vector alignment in terms of cosine similarity.
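
The linear baseline that the paper improves on can be sketched as a least-squares mapping between two embedding spaces. The data below is synthetic and exactly linear by construction, so the single matrix W suffices; the paper's argument is that real domain-specific and general spaces are not linearly related, motivating the skip-connected autoencoder plus regression model.

```python
import numpy as np

# Least-squares linear alignment baseline: learn one matrix W mapping
# domain-specific vectors X onto general-space vectors Y for a seed
# dictionary of shared terms. Synthetic data, not real embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))       # toy domain-specific embeddings
W_true = rng.normal(size=(8, 8))    # hidden linear relation
Y = X @ W_true                      # toy "general space" targets

W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # argmin_W ||X W - Y||_F
aligned = X @ W                     # mapped vectors, here ~ exactly Y
```

When the true relation is nonlinear, no single W can fit it well, and the residual of this least-squares fit is exactly the distortion the proposed nonlinear method aims to remove.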

Evaluations of Spectral Analysis of in vitro 2D-COSY and 2D-NOESY on Human Brain Metabolites (인체 뇌 대사물질에서의 In vitro 2D-COSY와 2D-NOESY 스펙트럼 분석 평가)

  • Choe, Bo-Young;Woo, Dong-Cheol;Kim, Sang-Young;Choi, Chi-Bong;Lee, Sung-Im;Kim, Eun-Hee;Hong, Kwan-Soo;Jeon, Young-Ho;Cheong, Chae-Joon;Kim, Sang-Soo;Lim, Hyang-Sook
    • Investigative Magnetic Resonance Imaging / v.12 no.1 / pp.8-19 / 2008
  • Purpose: To investigate the 3-bond and spatial connectivity of human brain metabolites via scalar coupling and the dipolar nuclear Overhauser effect/enhancement (NOE) interaction, using 2D correlation spectroscopy (COSY) and 2D NOE spectroscopy (NOESY). Materials and Methods: All 2D experiments were performed on a Bruker Avance 500 (11.8 T) with a z-shield gradient triple-resonance cryoprobe at 298 K. Human brain metabolite samples were prepared with 10% $D_2O$. Two-dimensional spectra with 2048 data points comprised 320 averaged free induction decays (FID), with a repetition delay of 2 s. TopSpin 2.0 software was used for post-processing. Seven major target metabolites were included: N-acetyl aspartate (NAA), creatine (Cr), choline (Cho), glutamine (Gln), glutamate (Glu), myo-inositol (Ins), and lactate (Lac). Results: Symmetrical 2D-COSY and 2D-NOESY spectra were successfully acquired: COSY cross peaks were observed only in the 1.0-4.5 ppm range, whereas NOESY cross peaks were observed in the 1.0-4.5 ppm range and at 7.9 ppm. In the 2D-COSY data, cross peaks were observed in Lac between the methyl protons ($CH_3$(3)) at 1.33 ppm and the methine proton (CH(2)) at 4.11 ppm, and in NAA between the methylene protons ($CH_2$(3,$H_{\alpha}$)) at 2.50 ppm and the methylene protons ($CH_2$(3,$H_{\beta}$)) at 2.70 ppm. In Ins, cross peaks were observed between the methine proton (CH(5)) at 3.27 ppm and the methine protons (CH(4,6)) at 3.59 ppm, between the methine protons (CH(1,3)) at 3.53 ppm and the methine protons (CH(4,6)) at 3.59 ppm, and between the methine protons (CH(1,3)) at 3.53 ppm and the methine proton (CH(2)) at 4.05 ppm. In the 2D-NOESY data, cross peaks were observed in NAA between the NH proton at 8.00 ppm and the methyl protons ($CH_3$), and in Lac between the methyl protons ($CH_3$(3)) at 1.33 ppm and the methine proton (CH(2)) at 4.11 ppm. In Cr, cross peaks were observed between the methyl protons ($CH_3$) at 3.03 ppm and the methylene protons ($CH_2$) at 3.93 ppm. In Glu, cross peaks were observed between the methylene protons ($CH_2$(3)) at 2.11 ppm and the methylene protons ($CH_2$(4)) at 2.35 ppm, and between the methylene protons ($CH_2$(3)) at 2.11 ppm and the methine proton (CH(2)) at 3.76 ppm. In Gln, cross peaks were observed between the methylene protons ($CH_2$(3)) at 2.14 ppm and the methine proton (CH(2)) at 3.79 ppm. In Ins, cross peaks were observed between the methine proton (CH(5)) at 3.27 ppm and the methine protons (CH(4,6)) at 3.59 ppm, and between the methine protons (CH(1,3)) at 3.53 ppm and the methine proton (CH(2)) at 4.05 ppm. Conclusion: This study demonstrated that in vitro 2D-COSY and NOESY represent the 3-bond and spatial connectivity of human brain metabolites via scalar coupling and dipolar NOE interaction, which could aid in better understanding the interactions between human brain metabolites in in vivo 2D-COSY studies.

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.109-131 / 2014
  • As worldwide demand for nuclear power plant equipment grows, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, the preadjudication (prescreening, for short) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of such experts, and developing one takes a long time. Because human experts must manually evaluate every document submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To reduce the reliance on costly human experts, this research proposes a system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built on case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. This research proposes a framework for a case-based reasoning system, designs such a system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). Keyword extraction is an essential component of the system, as it is used to extract the key features of the cases. The fully automatic method used TF-IDF, the widely used de facto standard for representative keyword extraction in text mining: TF (term frequency) counts how often a term occurs within a document, indicating the term's importance in that document, while IDF (inverse document frequency) is based on the term's infrequency across the document set, indicating how uniquely the term represents the document. The results show that the semi-automatic approach, based on collaboration between machine and human, is the most effective regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity within a new document-analysis framework. The proposed algorithm considers both document-to-document similarity (${\alpha}$) and document-to-nuclear-system similarity (${\beta}$) to derive a final score (${\gamma}$) for deciding whether the presented case concerns strategic material. The final score (${\gamma}$) represents the similarity between past cases and the new case; it is induced not only from conventional TF-IDF but also from a nuclear-system similarity score that takes the context of the nuclear-system domain into account. Finally, the system retrieves the top-3 documents in the case base that are most similar to the new case and presents them with a degree of credibility. With the final score and the credibility score, a user can more easily see which documents in the case base are worth looking up, and can therefore make a proper decision at relatively lower cost. The system was evaluated by developing a prototype and testing it with field data; the workflows and outcomes were verified by field experts. This research is expected to contribute to the growth of the knowledge-service industry by proposing a system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that can serve as a meaningful example of a knowledge-service application.
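
The TF-IDF weighting and cosine comparison underlying the fully automatic keyword extraction can be sketched compactly as below. The toy documents are illustrative; the paper additionally combines this document-to-document score with a separate document-to-nuclear-system score to form the final decision score, a combination not reproduced here.

```python
# Compact TF-IDF weighting and cosine similarity over a toy document set.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return one {term: tf * idf} weight dict per document."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for doc in tokenized for term in set(doc))
    return [{term: tf * math.log(n / df[term])
             for term, tf in Counter(doc).items()}
            for doc in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["nuclear reactor pump valve", "reactor pump seal test",
        "export license application form"]
vecs = tfidf_vectors(docs)
```

Retrieving the top-k most similar past cases, as the system does for its top-3 recommendations, then reduces to sorting the case base by this cosine score against the new case's vector.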