• Title/Summary/Keyword: array processing


A Study on Generating Virtual Shot-Gathers from Traffic Noise Data (교통차량진동 자료에 대한 최적 가상공통송신원모음 제작 연구)

  • Woohyun Son;Yunsuk Choi;Seonghyung Jang;Donghoon Lee;Snons Cheong;Yonghwan Joo;Byoung-yeop Kim
    • Geophysics and Geophysical Exploration
    • /
    • v.26 no.4
    • /
    • pp.229-237
    • /
    • 2023
  • The use of artificial sources such as explosives and mechanical vibrations for seismic exploration in urban areas poses challenges, as the vibrations and noise generated can lead to complaints. As an alternative to artificial sources, the surface waves generated by traffic noise can be used to investigate the subsurface properties of urban areas. However, traffic noise takes the form of plane waves moving continuously at a constant speed. To apply existing surface-wave processing/inversion techniques to traffic noise, the recorded data need to be transformed into a virtual shot-gather format using seismic interferometry. In this study, various seismic interferometry methods were applied to traffic noise data, and the optimal method was derived by comparing the results in the Radon and F-K domains. Additionally, data acquired using various receiver arrays were processed using seismic interferometry, and the results were compared and analyzed to determine the optimal receiver-array direction for exploration.
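The core step described above, turning continuous traffic-noise records into a virtual shot gather by seismic interferometry, is commonly implemented as windowed cross-correlation against a chosen "virtual source" receiver. The sketch below is a minimal illustration under that assumption; the array shapes, window length, and frequency-domain correlation are placeholders, not the paper's exact workflow.

```python
# Hedged sketch: cross-correlation seismic interferometry that converts
# continuous noise records into a virtual shot gather.
import numpy as np

def virtual_shot_gather(noise, src_idx, win_len):
    """noise: (n_receivers, n_samples) continuous records; returns (n_receivers, win_len) gather."""
    n_rec, n_samp = noise.shape
    n_win = n_samp // win_len
    gather = np.zeros((n_rec, win_len))
    for w in range(n_win):
        seg = noise[:, w * win_len:(w + 1) * win_len]
        S = np.fft.rfft(seg, axis=1)
        # correlate every receiver with the virtual-source receiver, then stack over windows
        corr = np.fft.irfft(S * np.conj(S[src_idx]), n=win_len, axis=1)
        gather += corr
    return gather / max(n_win, 1)

# usage (synthetic sizes): gather = virtual_shot_gather(np.random.randn(24, 60_000), src_idx=0, win_len=2048)
```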

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We pursued two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology, in which the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM memory. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality; many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules through the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program, so tailoring an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

    TABLE I. Inference time using 51 rules
                        MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6000 inferences     125 s                  49 s                        0.0038 s
    1 inference         20.8 ms                8.2 ms                      6.4 µs
    FLIPS               48                     122                         156,250
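As a rough illustration of the inference mechanism described in this abstract, the sketch below runs Mamdani-style max-min inference over fuzzy sets discretized as 64-element arrays (matching the representation mentioned above), with centroid defuzzification. The membership functions and rule base are invented for the example; this is not the chip's datapath.

```python
# Minimal sketch (not the chip's implementation): max-min compositional inference
# with Mamdani implication and centroid defuzzification over 64-point fuzzy sets.
import numpy as np

N = 64                                   # points per fuzzy-set membership array
x = np.linspace(0.0, 1.0, N)             # common universe of discourse (assumed)

def tri(a, b, c):
    """Triangular membership function sampled on x (illustrative only)."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0)

# Two example rules: IF input is A_i THEN output is E_i
A = [tri(0.0, 0.2, 0.5), tri(0.3, 0.6, 0.9)]   # antecedent sets
E = [tri(0.0, 0.3, 0.6), tri(0.4, 0.7, 1.0)]   # consequent sets

def infer(x_in):
    agg = np.zeros(N)
    for Ai, Ei in zip(A, E):
        w = np.interp(x_in, x, Ai)                # firing strength of the rule
        agg = np.maximum(agg, np.minimum(w, Ei))  # min = implication, max = aggregation
    return np.sum(x * agg) / (np.sum(agg) + 1e-12)  # centroid defuzzification

print(infer(0.25))
```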


GPR Development for Landmine Detection (지뢰탐지를 위한 GPR 시스템의 개발)

  • Sato, Motoyuki;Fujiwara, Jun;Feng, Xuan;Zhou, Zheng-Shu;Kobayashi, Takao
    • Geophysics and Geophysical Exploration
    • /
    • v.8 no.4
    • /
    • pp.270-279
    • /
    • 2005
  • Under a research project supported by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), we have conducted the development of GPR systems for landmine detection. By 2005, we had finished the development of two prototype GPR systems, namely ALIS (Advanced Landmine Imaging System) and SAR-GPR (Synthetic Aperture Radar-Ground Penetrating Radar). ALIS is a novel landmine detection sensor system combining a metal detector and GPR. It is hand-held equipment that has a sensor position tracking system and can visualize the sensor output in real time. To achieve sensor tracking, ALIS needs only one CCD camera attached to the sensor handle. The CCD image is superimposed with the GPR and metal detector signals, and the detection and identification of buried targets is quite easy and reliable. A field evaluation test of ALIS was conducted in December 2004 in Afghanistan, where we demonstrated that it can detect buried antipersonnel landmines and can also discriminate metal fragments from landmines. SAR-GPR is a machine-mounted sensor system composed of a GPR and a metal detector. The GPR employs an array antenna for advanced signal processing for better subsurface imaging. SAR-GPR, combined with a synthetic aperture radar algorithm, can suppress clutter and can image buried objects in strongly inhomogeneous material. SAR-GPR is a stepped-frequency radar system whose RF components are newly developed compact vector network analyzers. The size of the system is 30 cm x 30 cm x 30 cm, and it is composed of six Vivaldi antennas and three vector network analyzers. The weight of the system is 17 kg, and it can be mounted on a robotic arm on a small unmanned vehicle. The field test of this system was carried out in March 2005 in Japan.
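As a self-contained illustration of the stepped-frequency radar principle mentioned for SAR-GPR (a vector network analyzer measures the complex reflection response at discrete frequencies, and an inverse FFT turns it into a time/range profile), here is a hedged sketch. The frequency band, target depth, and free-space propagation are assumptions for the example, not the system's specifications.

```python
# Hedged sketch: stepped-frequency radar range profiling via inverse FFT.
import numpy as np

c = 3e8
freqs = np.linspace(0.5e9, 3.0e9, 201)                   # stepped frequencies (assumed band)
target_delay = 2 * 0.30 / c                              # two-way travel time to a reflector at 0.30 m
S11 = 0.2 * np.exp(-2j * np.pi * freqs * target_delay)   # idealized single-reflector response

profile = np.abs(np.fft.ifft(S11, n=1024))               # synthesized time/range profile
dt = 1.0 / (1024 * (freqs[1] - freqs[0]))                # time per bin of the profile
print("estimated reflector range [m]:", 0.5 * c * dt * np.argmax(profile))
```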

THE CURRENT STATUS OF BIOMEDICAL ENGINEERING IN THE USA

  • Webster, John G.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1992 no.05
    • /
    • pp.27-47
    • /
    • 1992
  • Engineers have developed new instruments that aid in diagnosis and therapy. Ultrasonic imaging has provided a nondamaging method of imaging internal organs. A complex transducer emits ultrasonic waves at many angles and reconstructs a map of internal anatomy and also velocities of blood in vessels. Fast computed tomography permits reconstruction of the 3-dimensional anatomy and perfusion of the heart at 20-Hz rates. Positron emission tomography uses certain isotopes that produce positrons that react with electrons to simultaneously emit two gamma rays in opposite directions. It locates the region of origin by using a ring of discrete scintillation detectors, each in electronic coincidence with an opposing detector. In magnetic resonance imaging, the patient is placed in a very strong magnetic field. The precessing of the hydrogen atoms is perturbed by an interrogating field to yield two-dimensional images of soft tissue having exceptional clarity. As an alternative to radiology image processing, film archiving, and retrieval, picture archiving and communication systems (PACS) are being implemented. Images from computed radiography, magnetic resonance imaging (MRI), nuclear medicine, and ultrasound are digitized, transmitted, and stored in computers for retrieval at distributed workstations. In electrical impedance tomography, electrodes are placed around the thorax. A 50-kHz current is injected between two electrodes and voltages are measured on all other electrodes. A computer processes the data to yield an image of the resistivity of a 2-dimensional slice of the thorax. During fetal monitoring, a corkscrew electrode is screwed into the fetal scalp to measure the fetal electrocardiogram. Correlations with uterine contractions yield information on the status of the fetus during delivery. To measure cardiac output by thermodilution, cold saline is injected into the right atrium. A thermistor in the right pulmonary artery yields temperature measurements, from which we can calculate cardiac output. In impedance cardiography, we measure the changes in electrical impedance as the heart ejects blood into the arteries. Motion artifacts are large, so signal averaging is useful during monitoring. An intraarterial blood gas monitoring system permits monitoring in real time. Light is sent down optical fibers inserted into the radial artery, where it is absorbed by dyes, which reemit the light at a different wavelength. The emitted light travels up optical fibers to an external instrument that determines O2, CO2, and pH. Therapeutic devices include the electrosurgical unit. A high-frequency electric arc is drawn between the knife and the tissue. The arc cuts and the heat coagulates, thus preventing blood loss. Hyperthermia has demonstrated antitumor effects in patients in whom all conventional modes of therapy have failed. Methods of raising tumor temperature include focused ultrasound, radio-frequency power through needles, or microwaves. When the heart stops pumping, we use the defibrillator to restore normal pumping. A brief, high-current pulse through the heart synchronizes all cardiac fibers to restore normal rhythm. When the cardiac rhythm is too slow, we implant the cardiac pacemaker. An electrode within the heart stimulates the cardiac muscle to contract at the normal rate. When the cardiac valves are narrowed or leak, we implant an artificial valve. Silicone rubber and Teflon are used for biocompatibility. Artificial hearts powered by pneumatic hoses have been implanted in humans.
However, the quality of life gradually degrades, and death ensues. When kidney stones develop, lithotripsy is used. A spark creates a pressure wave, which is focused on the stone and fragments it. The pieces pass out normally. When kidneys fail, the blood is cleansed during hemodialysis. Urea passes through a porous membrane to a dialysate bath to lower its concentration in the blood. The blind are able to read by scanning the Optacon with their fingertips. A camera scans letters and converts them to an array of vibrating pins. The deaf are able to hear using a cochlear implant. A microphone detects sound and divides it into frequency bands. Twenty-two electrodes within the cochlea stimulate the acoustic nerve to provide sound patterns. For those who have lost muscle function in the limbs, researchers are implanting electrodes to stimulate the muscle. Sensors in the legs and arms feed back signals to a computer that coordinates the stimulators to provide limb motion. For those with high spinal cord injury, a puff-and-sip switch can control a computer and permit the disabled person to operate the computer and communicate with the outside world.
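To make one of the measurement principles listed above concrete, the sketch below applies the Stewart-Hamilton relation behind thermodilution cardiac output: the heat deficit of the injected cold saline divided by the area under the blood-temperature dilution curve. All numbers, including the lumped correction constant, are illustrative assumptions, not clinical values.

```python
# Hedged illustration of thermodilution cardiac output (Stewart-Hamilton relation).
import numpy as np

def cardiac_output_lpm(t, delta_T_blood, V_inj_ml, T_blood, T_inj, k=1.08):
    """k lumps density/specific-heat and catheter corrections (assumed value)."""
    area = np.trapz(delta_T_blood, t)                                 # degC * s under the dilution curve
    return (V_inj_ml * (T_blood - T_inj) * k / area) * 60.0 / 1000.0  # convert ml/s to L/min

t = np.linspace(0, 20, 400)                       # seconds
curve = 0.5 * np.exp(-((t - 6.0) / 3.0) ** 2)     # synthetic blood-temperature drop, degC
print(cardiac_output_lpm(t, curve, V_inj_ml=10, T_blood=37.0, T_inj=0.0))
```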


Quantitative Analysis of Digital Radiography Pixel Values to absorbed Energy of Detector based on the X-Ray Energy Spectrum Model (X선 스펙트럼 모델을 이용한 DR 화소값과 디텍터 흡수에너지의 관계에 대한 정량적 분석)

  • Kim Do-Il;Kim Sung-Hyun;Ho Dong-Su;Choe Bo-young;Suh Tae-Suk;Lee Jae-Mun;Lee Hyoung-Koo
    • Progress in Medical Physics
    • /
    • v.15 no.4
    • /
    • pp.202-209
    • /
    • 2004
  • Flat panel based digital radiography (DR) systems have recently become useful and important in the field of diagnostic radiology. For DRs with amorphous silicon photosensors, CsI(Tl) is normally used as the scintillator, which produces visible light corresponding to the absorbed radiation energy. The visible light photons are converted into electric signals in the amorphous silicon photodiodes, which constitute a two-dimensional array. In order to produce good quality images, the detailed behavior of DR detectors under radiation must be studied. The relationship between air exposure and DR outputs has been investigated in many studies, but only under the condition of a fixed tube voltage. In this study, we investigated the relationship between DR outputs and X-rays in terms of the energy absorbed in the detector, rather than the air exposure, using SPEC-l8, an X-ray energy spectrum model. Measured exposure was compared with calculated exposure to obtain the inherent filtration, which is an important input variable of SPEC-l8. The absorbed energy in the detector was calculated using an algorithm for computing the absorbed energy in the material, and pixel values of real images were obtained under various conditions. The characteristic curve was obtained from the relationship between the two parameters, and the results were verified using phantoms made of water and aluminum. The pixel values of the phantom images were estimated and compared with the characteristic curve under various conditions. It was found that the relationship between the DR outputs and the absorbed energy in the detector was almost linear. In the experiment using the phantoms, the estimated pixel values agreed with the characteristic curve, although the effect of scattered photons introduced some errors. The effect of scattered X-rays must be studied further because it was not included in the calculation algorithm. The results of this study can provide useful information about the pre-processing of digital radiography.
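Since the abstract reports an approximately linear relationship between DR pixel values and the energy absorbed in the detector, a characteristic curve of that kind can be summarized by a straight-line fit. The sketch below is a hedged illustration with synthetic numbers, not the paper's measured data.

```python
# Hedged sketch: fitting a linear characteristic curve (pixel value vs. absorbed energy).
import numpy as np

absorbed_energy = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # relative absorbed energy per pixel (synthetic)
pixel_value     = np.array([210, 405, 820, 1610, 3230])   # measured DR output (synthetic)

slope, intercept = np.polyfit(absorbed_energy, pixel_value, 1)   # least-squares linear fit
print(f"pixel_value ~ {slope:.1f} * E_abs + {intercept:.1f}")

# predict the pixel value behind a phantom that transmits 35 % of the open-beam energy
print("predicted:", slope * (0.35 * 8.0) + intercept)
```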


2-D/3-D Seismic Data Acquisition and Quality Control for Gas Hydrate Exploration in the Ulleung Basin (울릉분지 가스하이드레이트 2/3차원 탄성파 탐사자료 취득 및 품질관리)

  • Koo, Nam-Hyung;Kim, Won-Sik;Kim, Byoung-Yeop;Cheong, Snons;Kim, Young-Jun;Yoo, Dong-Geun;Lee, Ho-Young;Park, Keun-Pil
    • Geophysics and Geophysical Exploration
    • /
    • v.11 no.2
    • /
    • pp.127-136
    • /
    • 2008
  • To identify potential gas hydrate areas in the Ulleung Basin, 2-D and 3-D seismic surveys using R/V Tamhae II were conducted in 2005 and 2006. The seismic survey equipment consisted of a navigation system, a recording system, a streamer cable, and an air-gun source. For reliable velocity analysis in a deep-sea area where water depths are mostly greater than 1,000 m and the target depth is up to about 500 msec below the seafloor, a 3-km-long streamer and a 1,035 $in^3$ tuned air-gun array were used. During the survey, a suite of quality control operations including source signature analysis, 2-D brute stacking, RMS noise analysis, and FK analysis was performed. The source signature was calculated to verify its conformity to the quality specification, and a gun dropout test was carried out to examine signature changes due to a single air gun's failure. From the online quality analysis, we could conclude that the overall data quality was very good even though some seismic data were affected by swell noise, parity errors, spike noise, and current rip noise. In particular, by checking the result of data quality enhancement using FK filtering and a missing-trace restoration technique for 3-D seismic data inevitably contaminated with current rip noise, the acquired data were accepted and the field survey could continue. Even in survey areas where the acquired data might not meet the quality specification, marine seismic survey efficiency could be improved by demonstrating the possibility of noise suppression through onboard data processing.
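The onboard quality-control step highlighted above, suppressing current-rip-type noise with FK filtering, amounts to masking low apparent velocities in the frequency-wavenumber plane. The sketch below is a minimal version under assumed sampling and velocity parameters, not the survey's production workflow.

```python
# Hedged sketch: f-k (frequency-wavenumber) fan filter for a shot gather.
import numpy as np

def fk_filter(gather, dt, dx, v_min=1450.0):
    """gather: (n_traces, n_samples); keep events with apparent velocity >= v_min m/s."""
    n_x, n_t = gather.shape
    F = np.fft.fft2(gather)
    f = np.fft.fftfreq(n_t, d=dt)            # temporal frequency (Hz)
    k = np.fft.fftfreq(n_x, d=dx)            # spatial wavenumber (1/m)
    K, Fq = np.meshgrid(k, f, indexing="ij")
    keep = np.abs(Fq) >= v_min * np.abs(K)   # reject slow (low apparent velocity) noise
    return np.real(np.fft.ifft2(F * keep))

# usage (synthetic sizes): filtered = fk_filter(np.random.randn(120, 2048), dt=0.002, dx=12.5)
```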

Can We Hear the Shape of a Noise Source? (소음원의 모양을 들어서 상상할 수 있을까?)

  • Kim, Yang-Hann
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.14 no.7
    • /
    • pp.586-603
    • /
    • 2004
  • One of the subtle problems that make noise control difficult for engineers is “the invisibility of noise or sound.” A visual image of noise often helps to determine an appropriate means of noise control. There have been many attempts to fulfill this rather challenging objective. Theoretical and numerical means of visualizing the sound field have been attempted, and as a result a great deal of progress has been accomplished, for example in the field of visualization of turbulent noise. However, most of the numerical methods are not quite ready to be applied practically to noise control issues. In the meantime, fast progress has made visualization possible instrumentally by using multiple microphones and fast signal processing systems; although these systems are not perfect, they are useful. A state-of-the-art system has recently become available, but it still has many problematic issues: for example, how we can implement the visualized noise field. The constructed noise or sound picture always contains bias and random errors, and consequently it is often difficult to determine the origin of the noise and the spatial shape of the noise, as highlighted in the title. The first part of this paper introduces a brief history associated with “sound visualization,” from Leonardo da Vinci's famous drawing of a vortex street (Fig. 1) to modern acoustic holography and what has been accomplished by a line or surface array. The second part introduces the difficulties and the recent studies. These include de-Dopplerization and de-reverberation methods. The former is essential for visualizing a moving noise source, such as cars or trains. The latter relates to what produces noise in a room or closed space. Another major issue associated with this sound/noise visualization is whether or not we can distinguish the mutual dependence of noise in space: for example, we are asked to answer the question, “Can we see two birds singing, or one bird with two beaks?”
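As a minimal, hedged illustration of how a microphone line array can "see" where noise comes from, the sketch below uses delay-and-sum beamforming on a synthetic plane wave; it is a simpler stand-in for the acoustic holography methods discussed in the paper, with invented geometry and signals.

```python
# Hedged sketch: delay-and-sum beamforming with a uniform microphone line array.
import numpy as np

c, fs, n_mic, d = 343.0, 48_000, 16, 0.05          # speed of sound, sample rate, mic count, spacing
t = np.arange(4096) / fs
true_angle = np.deg2rad(25.0)
mics = np.stack([np.sin(2 * np.pi * 1000 * (t - i * d * np.sin(true_angle) / c))
                 for i in range(n_mic)])           # 1 kHz plane wave arriving from 25 degrees

def beam_power(angle):
    delays = np.arange(n_mic) * d * np.sin(angle) / c
    # time-shift each channel to align a wave from the steering direction, then sum
    shifted = [np.interp(t, t - delays[i], mics[i]) for i in range(n_mic)]
    return np.mean(np.sum(shifted, axis=0) ** 2)

angles = np.deg2rad(np.linspace(-90, 90, 181))
best = angles[np.argmax([beam_power(a) for a in angles])]
print("estimated source direction [deg]:", np.rad2deg(best))
```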

Characteristics of the Electro-Optical Camera(EOC) (다목적실용위성탑재 전자광학카메라(EOC)의 성능 특성)

  • Seunghoon Lee;Hyung-Sik Shim;Hong-Yul Paik
    • Korean Journal of Remote Sensing
    • /
    • v.14 no.3
    • /
    • pp.213-222
    • /
    • 1998
  • The Electro-Optical Camera (EOC) is the main payload of the KOrea Multi-Purpose SATellite (KOMPSAT), with the mission of cartography to build up a digital map of Korean territory, including a Digital Terrain Elevation Map (DTEM). This instrument, which comprises the EOC Sensor Assembly and the EOC Electronics Assembly, produces panchromatic images of 6.6 m GSD with a swath wider than 17 km by push-broom scanning and spacecraft body pointing, in a visible wavelength range of 510~730 nm. The high-resolution panchromatic images are to be collected for 2 minutes during the 98-minute orbit cycle, covering about 800 km along the ground track, over the mission lifetime of 3 years, with the functions of programmable gain/offset and on-board image data storage. The image, digitized to 8 bits and collected by a fully reflective F8.3 triplet without obscuration, is to be transmitted to the Ground Station at a rate of less than 25 Mbps. EOC was developed to have performance that meets or surpasses its design-phase requirements. The spectral response, the modulation transfer function, and the uniformity of all 2592 pixels of the EOC CCD are illustrated as they were measured, for the convenience of the end user. The spectral response was measured with respect to each gain setup of EOC, and this is expected to give EOC data users the capability of generating more accurate panchromatic images. The modulation transfer function of EOC was measured as greater than 16% at the Nyquist frequency over the entire field of view, which exceeds the requirement of larger than 10%. The uniformity, which shows the relative response of each pixel of the CCD, was measured at every pixel of the Focal Plane Array of EOC and is illustrated for data processing.
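A quick back-of-the-envelope check of the figures quoted above (2592 CCD pixels across a swath wider than 17 km, MTF of 16% against a 10% requirement at Nyquist) can be written out as follows; the swath value used is the quoted lower bound, so the derived GSD and Nyquist frequency are approximate.

```python
# Hedged arithmetic check of the quoted GSD, Nyquist frequency, and MTF margin.
swath_km, n_pixels = 17.0, 2592
gsd_m = swath_km * 1000 / n_pixels              # ~6.6 m, consistent with the quoted GSD
nyquist_cyc_per_km = 1000 / (2 * gsd_m)         # highest recoverable ground spatial frequency
mtf_measured, mtf_required = 0.16, 0.10
print(f"GSD = {gsd_m:.1f} m, Nyquist = {nyquist_cyc_per_km:.0f} cycles/km, "
      f"MTF margin = {mtf_measured - mtf_required:.2f}")
```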

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.27 no.3
    • /
    • pp.127-143
    • /
    • 2022
  • Recently, many attempts have been made to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These new features facilitate ocean modeling experimentation on commercial cloud computing systems. Many scientists and engineers expect cloud computing to become mainstream in the near future. Analysis of the performance and features of commercial cloud services for numerical modeling is essential in order to select appropriate systems, as this can help to minimize execution time and the amount of resources utilized. The effect of cache memory is large in the processing structure of an ocean numerical model, which processes input/output of data in a multidimensional array structure, and the speed of the network is important because of the communication characteristics, in which a large amount of data moves. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking software package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to provide information for the transition of other ocean models into cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes in the virtualization-based cloud systems. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses huge amounts of memory. Memory latency is also important to the performance. Increasing the number of cores to reduce the running time of numerical modeling is more effective with large grid sizes than with small grid sizes. Our analysis results will be helpful as a reference for constructing the best computing system in the cloud to minimize time and cost for numerical ocean modeling.
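The scaling observation in the abstract, that adding cores pays off more for large grids than for small ones, can be illustrated with a generic compute-plus-communication cost model. The model below is an assumption for illustration only, not ROMS's measured profile on any cloud.

```python
# Hedged sketch: why strong scaling is better for large grids under a simple cost model.
def run_time(n_cells, n_cores, t_cell=1e-7, t_comm_per_core=0.02):
    compute = n_cells * t_cell / n_cores          # perfectly divisible grid work
    comm = t_comm_per_core * n_cores ** 0.5       # communication overhead grows with core count (assumed)
    return compute + comm

for n_cells in (1_000_000, 100_000_000):          # small vs. large model grid
    base = run_time(n_cells, 1)
    for cores in (16, 64, 256):
        speedup = base / run_time(n_cells, cores)
        print(f"grid={n_cells:>11,} cores={cores:>3} speedup={speedup:6.1f} "
              f"efficiency={speedup / cores:5.2f}")
```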