• Title/Summary/Keyword: Digital Filtering

610 search results

A Study on High-Precision DEM Generation Using ERS-Envisat SAR Cross-Interferometry (ERS-Envisat SAR Cross-Interferometry를 이용한 고정밀 DEM 생성에 관한 연구)

  • Lee, Won-Jin;Jung, Hyung-Sup;Lu, Zhong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.4
    • /
    • pp.431-439
    • /
    • 2010
  • The cross-interferometric synthetic aperture radar (CInSAR) technique using ERS-2 and Envisat images is capable of generating submeter-accuracy digital elevation models (DEMs). However, it is very difficult to produce a high-quality CInSAR-derived DEM because of the difference in azimuth and range pixel size between ERS-2 and Envisat images, as well as the small height ambiguity of the CInSAR interferogram. In this study, we propose an efficient method to overcome these problems, produce a high-quality DEM over northern Alaska, and compare the CInSAR-derived DEM with the national elevation dataset (NED) DEM from the U.S. Geological Survey. In the proposed method, azimuth common-band filtering is applied during radar raw-data processing to mitigate the mis-registration caused by the difference in azimuth and range pixel size, and a differential SAR interferogram (DInSAR) is used to reduce the unwrapping errors caused by the high fringe rate of the CInSAR interferogram. Using the CInSAR DEM, we identified and corrected man-made artifacts in the NED DEM. Wavenumber analysis further confirms that the CInSAR DEM has valid signal at high frequencies above 0.08 radians/m (about 40 m), while the NED DEM does not. Our results indicate that the CInSAR DEM is superior to the NED DEM in terms of both height precision and ground resolution.
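As an illustrative aside, the wavenumber comparison described in this abstract can be sketched as a simple spectral check on a 1-D elevation profile. This is a minimal stand-in, not the authors' processing chain; the synthetic profile, the 10 m spacing, and the function name are assumptions, while the 0.08 rad/m threshold comes from the abstract:

```python
import math

def power_spectrum(profile, spacing_m):
    """Power spectrum of a 1-D elevation profile via a direct DFT.

    Returns (wavenumber in radians per metre, power) pairs for the
    positive-frequency half of the spectrum.
    """
    n = len(profile)
    mean = sum(profile) / n
    centred = [z - mean for z in profile]
    out = []
    for k in range(1, n // 2):
        re = sum(centred[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
        im = sum(centred[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))
        wavenumber = 2 * math.pi * k / (n * spacing_m)
        out.append((wavenumber, (re * re + im * im) / n))
    return out

# Synthetic profile: broad relief plus fine-scale detail, sampled every 10 m.
profile = [math.sin(0.01 * j * 10) + 0.2 * math.sin(0.15 * j * 10) for j in range(256)]
spectrum = power_spectrum(profile, spacing_m=10.0)
# Power above the 0.08 rad/m threshold cited in the abstract indicates
# genuine fine-scale signal rather than interpolation smoothing.
high_freq = [p for w, p in spectrum if w > 0.08]
```

A DEM built from real fine-scale measurements keeps power above the threshold; an over-smoothed DEM (like the NED artifacts described above) does not.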

Filtering & Gridding Algorithms for Multibeam Echo Sounder Data based on Bathymetry (수심에 기반한 멀티빔 음향 측심 필터와 격자 대표값 선정 알고리즘)

  • 박요섭;김학일
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.11a
    • /
    • pp.260-265
    • /
    • 1999
  • Unlike a conventional singlebeam echo sounder, a multibeam echo sounder scans the seafloor in a swath perpendicular to the survey vessel's track (crosstrack), acquiring multiple beams of data per ping: depths, backscattered amplitudes, and side scan sonar data. In multibeam surveys, the swath width changes with depth, and the horizontal resolution (footprint) of each beam varies dynamically with both depth and beam width. Besides providing full-coverage seafloor surveys, continuous acoustic surveying produces forward-overlap regions (endlap) between neighboring pings and side-overlap regions (sidelap) between adjacent tracklines, whose data can be used to assess the overall accuracy and reliability of the multibeam data. This paper concerns the development of algorithms for processing sounding data obtained with a multibeam echo sounder in hydrographic surveys. Using L3's Sea Beam 2100 multibeam echo sounder, we characterize the statistical properties of the acquired soundings based on the principles of multibeam sounding and a general understanding of seafloor topography, propose a method for removing erroneous soundings, and present criteria for choosing a representative grid size for the surveyed area. We also compare the mean and weighted-mean interpolation methods used for elevation estimation in airborne remote sensing with the gridding algorithms proposed here, and present the resulting seafloor digital elevation model (DEM) and backscatter imagery.
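The depth-based filtering and grid-representative selection described above can be sketched roughly as follows. This is a toy stand-in under stated assumptions, not the paper's algorithm: the 5% depth-proportional tolerance, the ping-median reference, and the inverse-distance weighting are all illustrative choices.

```python
import math
from collections import defaultdict

def filter_soundings(soundings, tolerance=0.05):
    """Reject depths deviating from the median by more than a
    depth-proportional tolerance (5% here) -- a crude stand-in for
    the paper's statistics-based removal of erroneous soundings."""
    depths = sorted(d for _, _, d in soundings)
    median = depths[len(depths) // 2]
    return [s for s in soundings if abs(s[2] - median) <= tolerance * median]

def grid_weighted_mean(soundings, cell_size):
    """Bin (x, y, depth) soundings into square cells and take an
    inverse-distance weighted mean about each cell centre as the
    grid-representative value."""
    cells = defaultdict(list)
    for x, y, d in soundings:
        cells[(int(x // cell_size), int(y // cell_size))].append((x, y, d))
    grid = {}
    for (i, j), pts in cells.items():
        cx, cy = (i + 0.5) * cell_size, (j + 0.5) * cell_size
        wsum = dsum = 0.0
        for x, y, d in pts:
            w = 1.0 / (math.hypot(x - cx, y - cy) + 1e-6)
            wsum += w
            dsum += w * d
        grid[(i, j)] = dsum / wsum
    return grid
```

A plain mean or weighted mean (as used in airborne remote sensing) drops in for the weighting step; the paper's contribution is in choosing the tolerance and cell size from the beam geometry.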


A vision-based system for long-distance remote monitoring of dynamic displacement: experimental verification on a supertall structure

  • Ni, Yi-Qing;Wang, You-Wu;Liao, Wei-Yang;Chen, Wei-Huan
    • Smart Structures and Systems
    • /
    • v.24 no.6
    • /
    • pp.769-781
    • /
    • 2019
  • The dynamic displacement response of civil structures is an important index for in-construction and in-service structural condition assessment. However, accurately measuring the displacement of large-scale civil structures such as high-rise buildings remains a challenging task. To cope with this problem, a vision-based system using an industrial digital camera and image processing has been developed for long-distance, remote, and real-time monitoring of the dynamic displacement of supertall structures. Instead of acquiring image signals, the proposed system traces only the coordinates of the target points, enabling real-time monitoring and display of displacement responses at a relatively high sampling rate. This study addresses the in-situ experimental verification of the developed vision-based system on the 600 m-high Canton Tower. To facilitate the verification, a GPS system is used to calibrate and verify the structural displacement responses measured by the vision-based system. Meanwhile, an accelerometer deployed in the vicinity of the target point also provides frequency-domain information for comparison. Special attention has been given to understanding the influence of surrounding light on the monitoring results. For this purpose, the experimental tests are conducted in daytime and nighttime by placing the vision-based system outside the tower (in a bright environment) and inside the tower (in a dark environment), respectively. The results indicate that the displacement response time histories monitored by the vision-based system not only match well with those acquired by the GPS receiver, but also have higher fidelity and are less noise-corrupted. In addition, the low-order modal frequencies of the building identified from the vision-based data are all in good agreement with those obtained from the accelerometer, the GPS receiver, and an elaborate finite element model.
In particular, the vision-based system placed at the bottom of the enclosed elevator shaft provides better monitoring data than the system placed outside the tower. Based on a wavelet filtering technique, the displacement response time histories obtained by the vision-based system are readily decomposed into two parts: a quasi-static component primarily resulting from temperature variation and a dynamic component mainly caused by fluctuating wind load.
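The quasi-static/dynamic split mentioned above can be illustrated with a minimal Haar-wavelet sketch. This is an assumed stand-in for the (unspecified) wavelet filtering technique in the abstract; the number of levels and the zero-order-hold reconstruction are illustrative choices.

```python
def haar_split(signal, levels=4):
    """Split a displacement record into a quasi-static part (coarse
    Haar approximation) and a dynamic residual."""
    n = len(signal)
    assert n % (2 ** levels) == 0, "length must be divisible by 2**levels"
    approx = list(signal)
    for _ in range(levels):
        # Haar approximation step: average adjacent pairs.
        approx = [(approx[i] + approx[i + 1]) / 2.0
                  for i in range(0, len(approx), 2)]
    # Hold each coarse value over its block to restore full length.
    scale = n // len(approx)
    quasi_static = [approx[i // scale] for i in range(n)]
    dynamic = [s - q for s, q in zip(signal, quasi_static)]
    return quasi_static, dynamic

# Slow drift (e.g., thermal) plus a fast oscillation (e.g., wind-induced).
signal = [t / 16.0 + ((-1) ** t) * 0.5 for t in range(64)]
quasi, dyn = haar_split(signal, levels=4)
```

By construction the two parts sum back to the original record, mirroring the decomposition described in the abstract.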

Fit Tests for Second-class Half Masks (2급 방진마스크 밀착도 평가)

  • Cho, Kee Hong;Kim, Hyun Soo;Choi, Ah Rum;Chun, Ji Young;Kang, Tae Won;Kim, Min Su;Park, Kyeong Hak;Kim, Ze One
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.32 no.2
    • /
    • pp.146-152
    • /
    • 2022
  • Objectives: The purpose of this study is to determine whether any factors affect the fit-test results of 2nd-class half masks evaluated with an OPC test method. Methods: A total of 34 adults, male and female, were tested with OPC-based fit-testing equipment while wearing a 2nd-class filtering half mask. Results: 1. Face dimensions measured with a 3D scanner and digital calipers showed that the variation in lip width, a difference of only about 4 mm, was not statistically significant, whereas the difference in face length, about 10 mm, was (p<0.000). 2. Across the exercise stages, the fit factor was highest at 124.54 in Step 3 (p<0.001) and lowest at 73.75 in Step 1. 3. By gender, the pass rate for females was 67.44%, higher than that for males (p<0.038). 4. The pass rate for the group with a face length shorter than 110 mm was 91.67%, versus 47.27% for the group with a face length of 110 mm or longer (p<0.000). 5. Fit testing was feasible because 55% or more of the 2nd-class half masks, corresponding to FFP1 (Filtering Face Piece 1), passed. Conclusions: The results show that with a 2nd-class filtering half mask, it is important to wear a properly designed mask so that face size does not degrade the fit factor.
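The per-stage fit factors reported above are conventionally combined into an overall fit factor as a harmonic mean. The sketch below assumes that convention and a pass threshold of 100 (the usual level for half masks); the study's own criterion may differ, and the middle stage value in the example is hypothetical.

```python
def overall_fit_factor(stage_ffs):
    """Overall fit factor as the harmonic mean of per-exercise fit
    factors (each FF = ambient particle count / in-mask count)."""
    return len(stage_ffs) / sum(1.0 / ff for ff in stage_ffs)

def passes_fit_test(stage_ffs, threshold=100.0):
    """Pass/fail decision; the threshold of 100 is an assumed common
    pass level for half masks, not necessarily the study's criterion."""
    return overall_fit_factor(stage_ffs) >= threshold

# Example using the stage extremes reported in the abstract
# (73.75 in Step 1, 124.54 in Step 3); 95.0 is a made-up middle stage.
example = overall_fit_factor([73.75, 95.0, 124.54])
```

The harmonic mean weights poor stages heavily, so one leaky exercise (like Step 1 above) can fail an otherwise good mask.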

A Study on the Interior Design of a Dog-Friendly Hotel Using Deepfake DID for Alleviation of Pet loss Syndrome

  • Hwang, Sungi;Ryu, Gihwan
    • International Journal of Advanced Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.248-252
    • /
    • 2022
  • The environment refers to what surrounds something in human life. It is related to the way humans live, and it raises questions about how the surrounding environment is perceived and how the elements that constitute it support what human life requires. As humans perceive and understand the environment and accept the factors it supports, their interest in the environment's supportiveness grows with this interrelationship. In space, human movement proceeds from one space to the next, exchanging stimuli and responses with the environment until a target point is reached. Such movement begins with subjective judgment, and while walking, the spatial environment surrounding a person becomes a collection of necessary information and a source of stimulation. Through movement along a path in space, a person perceives displacement, undergoes psychological changes, and recognizes a series of spatial continuities, so that an image of thought is formed [1]. In this process, spatial experience is perceived through sensory filtering in real space, and cognition adds its result through subjective change accompanied by memory and knowledge, which in turn produces movement. Spatial search behavior thus begins with a series of perceptual and cognitive acts that arise as humans try to read meaning from objects in the environment. Here, cognition includes the psychological process of sorting and judging information while reading the meaning of the external environment, its conditions, and its material composition, and perception is the first step of accepting that information: the cognitive ability to read the meaning of the environment given to humans. 
Therefore, if we can grasp how space is perceived during movement and how human behavior responds to that perception, it becomes possible to predict, from a human point of view, how a space that does not yet exist will be understood. Taking as its theme a dog-friendly hotel for reminiscence and the healing of pet loss syndrome, this thesis attempts to approach the life of companions.

A Hermeneutic Phenomenological Study of Psychological Burnout Experiences due to Emotional Contagion (정서전염으로 인한 심리적 소진 경험에 관한 해석현상학적 연구)

  • Hyunju Ha;Jinsook Kim;Doyoun An
    • Korean Journal of Culture and Social Issue
    • /
    • v.30 no.2
    • /
    • pp.121-157
    • /
    • 2024
  • This study explored the essence of psychological burnout experiences due to emotional contagion using a hermeneutic phenomenological approach. In-depth interviews were conducted with 9 participants who work in fields subject to emotional contagion. Data were analyzed using van Manen's methodology, which holds that the pure description of an experience can be enriched by interpretation. The emotional contagion experiences identified through this process were categorized into 3 core themes, 8 essential themes, and 35 subthemes. The first core theme, "emotions in constant exchange", included two essential themes: 'various channels of emotional contagion' and 'subjective states that change depending on the transmitted emotions'. The second core theme, "filtering the experience of emotional contagion", included the essential themes of 'characteristics susceptible to the emotions of others', 'attitudes that spread negative emotions', and 'situations that make one feel overwhelmed by emotions'. The final core theme, "from burnout by emotional contagion to communication", was categorized into the essential themes of 'burnout-inducing entangled interactions', 'moving toward communication and connection', and 'recovery after psychological burnout'. Finally, implications and suggestions for future research were discussed by summarizing the core contents of each theme.

The Study on the Reduction of Patient Surface Dose Through the use of Copper Filter in a Digital Chest Radiography (디지털 흉부 촬영에서 구리필터사용에 따른 환자 표면선량 감소효과에 관한 연구)

  • Shin, Soo-In;Kim, Chong-Yeal;Kim, Sung-Chul
    • Journal of radiological science and technology
    • /
    • v.31 no.3
    • /
    • pp.223-228
    • /
    • 2008
  • The most critical point in the medical use of radiation is to minimize the patient's entrance dose while maintaining diagnostic performance. Low-energy photons (long-wavelength X-rays) in the diagnostic beam are unnecessary because they are mostly absorbed and only contribute to the increase of the patient's entrance dose. The most effective way to eliminate low-energy photons is added filtration. The experiments evaluated image quality and skin entrance dose with and without a 0.3 mmCu (copper) filter. A total of 80 images were prepared as two sets of 40. In each set, 20 images were acquired without the filter and 20 with the Cu filter; the first set contained signal-plus-noise images and the second set noise-only (non-signal) images. An acrylic disc of 4 mm diameter and 3 mm thickness, placed at random locations on the chest phantom, served as the ROC signal. P(S/s) and P(S/n) were calculated, the ROC curve was described in terms of sensitivity and specificity, and accuracy was evaluated after reading by five radiologists. The number of optically observable lesions was counted on an ANSI chest phantom and a contrast-detail phantom, following AAPM recommendations, with and without the Cu filter, and the skin entrance dose was measured for both conditions. With the Cu filter, favorable outcomes were observed: the ROC curve shifted toward the upper-left area, and sensitivity, accuracy, and the number of detected CD-phantom lesions remained reasonable. Since the skin entrance dose was also reduced, the use of additional filtration deserves consideration in many other examinations.
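The ROC quantities named in this abstract reduce to simple arithmetic: P(S/s) is the hit rate on signal images and P(S/n) the false-alarm rate on noise-only images. A minimal sketch, with all function names and the example numbers assumed:

```python
def roc_point(p_hit, p_false_alarm):
    """Sensitivity/specificity for one reader: sensitivity = P(S/s),
    specificity = 1 - P(S/n)."""
    return p_hit, 1.0 - p_false_alarm

def accuracy(p_hit, p_false_alarm, n_signal, n_noise):
    """Overall reading accuracy: correct calls over all images."""
    true_pos = p_hit * n_signal
    true_neg = (1.0 - p_false_alarm) * n_noise
    return (true_pos + true_neg) / (n_signal + n_noise)
```

A point toward the upper-left of the ROC plane (high sensitivity and high specificity at once) corresponds to the favorable Cu-filter outcome described above.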


High-resolution shallow marine seismic survey using an air gun and 6 channel streamer (에어건과 6채널 스트리머를 이용한 고해상 천부 해저 탄성파탐사)

  • Lee Ho-Young;Park Keun-Pil;Koo Nam-Hyung;Park Young-Soo;Kim Young-Gun;Seo Gab-Seok;Kang Dong-Hyo;Hwang Kyu-Duk;Kim Jong-Chon
    • Korean Society of Exploration Geophysicists: Conference Proceedings
    • /
    • 2002.09a
    • /
    • pp.24-45
    • /
    • 2002
  • For the last several decades, high-resolution shallow marine seismic techniques have been used for various resource, engineering, and geological surveys. Although the multichannel method is powerful for imaging subsurface structures, single-channel analog surveys have been employed more frequently in shallow-water exploration because they are more expedient and economical. To improve the quality of high-resolution seismic data economically, we acquired digital seismic data using a small air gun, a 6-channel streamer, and a PC-based system, performed data processing, and produced high-resolution seismic sections. Over several years, such test acquisitions were carried out alongside other studies with different purposes in areas off Pohang, in the Yellow Sea, and in Gyeonggi Bay. Basic data processing was applied to the acquired data; the processing sequence included gain recovery, deconvolution, filtering, normal-moveout and static corrections, CMP gathering, and stacking. Examples of digitally processed sections are shown and compared with analog sections; the digital sections have much higher resolution after data processing. The acquisition and processing results show that high-resolution shallow marine seismic surveys using a small air gun, a 6-channel streamer, and a PC-based system can be an effective way to image shallow subsurface structures precisely.
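Of the processing steps listed above, normal moveout (NMO) is the most formulaic: each zero-offset time t0 is read from the recorded trace at t(x) = sqrt(t0² + (x/v)²). A minimal sketch using nearest-sample lookup and no stretch mute; the sample interval, offset, and velocity in the comments are illustrative assumptions.

```python
import math

def nmo_correct(trace, offset_m, velocity_ms, dt_s):
    """Normal-moveout correction of one trace: the output sample at
    zero-offset time t0 is read from the input at
    t(x) = sqrt(t0**2 + (offset / velocity)**2),
    using the nearest input sample (no stretch mute applied)."""
    n = len(trace)
    out = [0.0] * n
    for i in range(n):
        t0 = i * dt_s
        t = math.sqrt(t0 * t0 + (offset_m / velocity_ms) ** 2)
        j = round(t / dt_s)
        if j < n:
            out[i] = trace[j]
    return out
```

After NMO, reflections on all offsets of a CMP gather align at t0 and can be stacked, which is the "normal moveout ... CMP gathering and stacking" sequence in the abstract.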


A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have become available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. A set of keywords is thus often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not benefit from keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task in that it is extremely tedious, time-consuming, and requires a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are two main approaches to this aim: the keyword assignment approach and the keyword extraction approach. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former, there is a given vocabulary, and the aim is to match its entries to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document.
Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, by contrast, the aim is to extract keywords according to their relevance in the text, without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted with supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so extraction is limited to terms that appear in the document and cannot generate implicit keywords that do not. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article; conversely, 10% to 36% of author-assigned keywords do not appear in the article and therefore cannot be generated by keyword extraction algorithms. Our own preliminary experiment shows that 37% of author-assigned keywords are not included in the full text. This is why we adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries by vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets.
The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with the highest similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
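Steps (3)-(5) of this assignment process can be sketched in a few lines. The toy keyword-set vectors, function names, and thresholding below are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def assign_keywords(doc_text, keyword_sets, top_k=5):
    """IVSM-style assignment sketch: represent the target document by
    term frequency, score every candidate keyword set against it, and
    return the keywords with the highest cosine similarity."""
    doc_vec = dict(Counter(doc_text.lower().split()))
    scored = [(kw, cosine(vec, doc_vec)) for kw, vec in keyword_sets.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [kw for kw, score in scored[:top_k] if score > 0]
```

Because the match is between the document and keyword-set vectors rather than the document's own terms, a keyword can be assigned even when it never appears in the text, which is exactly the advantage over extraction claimed above.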

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues such as unemployment, economic crisis, and social welfare that urgently need to be solved in modern society, the existing approach has researchers collect opinions from professional experts and scholars through online or offline surveys. However, this method is not always effective. Because of the expense involved, a large number of survey replies are seldom gathered, and in some cases it is hard to find experts dealing with specific social issues. The sample set is therefore often small and may be biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions because each has a subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which of them are really important. To surmount the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords, representing social issues and problems, from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting the news articles and extracting their texts, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of the Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic.
Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". From this label alone it is non-trivial to understand what actually happened with the unemployment problem in our society; in other words, looking only at social keywords, we have no idea of the detailed events occurring in society. To tackle this matter, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs, extract a set of topics with LDA, and assign each paragraph, via our matching process, to the topic it best matches; as a result, each topic ends up with several best-matched paragraphs. For instance, suppose there are a topic (e.g., Unemployment Problem) and a best-matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company at Seoul"). In this case, we can grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, researchers will be able to detect social issues easily and quickly.
Through this prototype system, we have detected various social issues appearing in our society and demonstrated the effectiveness of the proposed methods in our experimental results. A proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
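The topic-paragraph matching described above can be illustrated with a minimal scoring sketch: a paragraph is scored under a topic using the topic's (term, probability) pairs from LDA, with a small floor probability for out-of-topic words. The floor value, the per-word averaging, and the function names are assumptions; only the Topic1 example comes from the abstract.

```python
import math

def paragraph_score(paragraph, topic_terms, floor=1e-6):
    """Average log-probability of a paragraph's words under a topic,
    using the topic's (term, probability) pairs from LDA and a floor
    probability for words outside the topic."""
    words = paragraph.lower().split()
    return sum(math.log(topic_terms.get(w, floor)) for w in words) / len(words)

def best_topic(paragraph, topics):
    """Assign the paragraph to the topic scoring it highest."""
    return max(topics, key=lambda name: paragraph_score(paragraph, topics[name]))

# Topic1 from the abstract, plus a second hypothetical topic.
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Welfare": {"welfare": 0.5, "pension": 0.5},
}
```

Each paragraph lands in the topic whose term distribution explains it best, so the topic's best-matched paragraphs supply the detailed events behind the social keyword.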