• Title/Summary/Keyword: Window Sensor


Image Contrast and Sunlight Readability Enhancement for Small-sized Mobile Display (소형 모바일 디스플레이의 영상 컨트라스트 및 야외시인성 개선 기법)

  • Chung, Jin-Young;Hossen, Monir;Choi, Woo-Young;Kim, Ki-Doo
    • Journal of IKEEE, v.13 no.4, pp.116-124, 2009
  • Recently, the CPU performance of modem chipsets and multimedia processors in mobile phones has become as high as that of notebook PCs, which is why the mobile phone has emerged as a leading icon in the convergence of consumer electronics. Various mobile phone applications such as DMB, digital camera, video telephony, and full internet browsing are now offered to consumers, so image quality has become increasingly important. Because a mobile phone is a portable device used in both indoor and outdoor environments, the deterioration of image quality under varying ambient light must be overcome. Furthermore, touch windows are now common on mobile display panels and cause contrast loss because of the low transmittance of the ITO film. This paper presents an image enhancement algorithm to be embedded in an image enhancement SoC. For contrast enhancement, we propose a clipped histogram stretching method that adapts to the input images, while an S-shaped curve and a gain/offset method are used for static applications. The CIELCh color space is used for sunlight readability enhancement by controlling the lightness and chroma components according to the sensed value of the light sensor. Finally, the performance of the proposed algorithm is evaluated using the histogram, RGB pixel distribution, entropy, and dynamic range of the resulting images. We expect that the proposed algorithm is suitable for image enhancement in embedded SoC systems for small-sized mobile displays.
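The abstract above does not give the details of its clipped histogram stretching, so the following is only a generic sketch of that family of technique (function name, clip ratio, and 8-bit range are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def clipped_histogram_stretch(gray, clip_ratio=0.01):
    """Stretch contrast after clipping a fraction of pixels at each
    histogram tail, so isolated outliers do not dominate the range."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = np.cumsum(hist)
    total = cdf[-1]
    # Low/high cut points: discard clip_ratio of pixels at each tail.
    low = int(np.searchsorted(cdf, clip_ratio * total))
    high = int(np.searchsorted(cdf, (1.0 - clip_ratio) * total))
    high = max(high, low + 1)
    # Linearly remap the surviving range to the full 8-bit range.
    stretched = (gray.astype(np.float32) - low) * 255.0 / (high - low)
    return np.clip(stretched, 0, 255).astype(np.uint8)
```

Because the cut points are derived from the input's own histogram, the mapping adapts to each image, which is the property the abstract attributes to its adaptive contrast stage.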


Affecting Factor Analysis for Respiration Rate Measurement Using Depth Camera (깊이 카메라를 이용한 호흡률 측정에 미치는 영향 요인 분석)

  • Oh, Kyeong-Taek;Shin, Cheung-Soo;Kim, Jeongmin;Jang, Won-Seuk;Yoo, Sun-Kook
    • Science of Emotion and Sensibility, v.19 no.3, pp.81-88, 2016
  • The purpose of this research was to analyze several factors that can affect respiration rate measurement using the Creative Senz3D depth camera. Depth error and noise of the depth camera were considered as affecting factors, as was ambient light. The results showed that the depth error increased with the distance between the subject and the depth camera, and that the depth image exhibited asymmetry: depth values measured in the right region of the image were higher than the real distance, while those measured in the left region were lower. This depth error was influenced by the orientation of the depth camera. The noise produced by the depth camera increased as the distance between the subject and the camera increased, and decreased as the window size used to calculate the noise level was enlarged. Ambient light appeared to have no influence on the depth value. In a real environment, we measured respiration rate with participants asked to breathe 20 times, and found that the respiration rate measured by the depth camera showed excellent agreement with the participants' actual breathing.
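The abstract does not specify how the respiration rate is derived from the depth trace, but a common minimal approach, sketched here under that assumption (all names and the smoothing window are illustrative), is to smooth the chest-region mean depth and count breathing cycles as zero crossings:

```python
import numpy as np

def respiration_rate(depth_signal, fps):
    """Estimate breaths per minute from a chest-region mean-depth trace
    by counting positive-going zero crossings of the detrended signal."""
    x = np.asarray(depth_signal, dtype=float)
    x = x - x.mean()                      # remove the static chest distance
    k = max(1, int(fps // 2))             # ~0.5 s moving average vs. camera noise
    x = np.convolve(x, np.ones(k) / k, mode="same")
    crossings = np.sum((x[:-1] < 0) & (x[1:] >= 0))
    duration_min = len(x) / fps / 60.0
    return crossings / duration_min
```

The smoothing step matters here for exactly the reason the study measures: depth-camera noise grows with subject distance, and averaging over a window suppresses it.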

Improvement of Cloud-data Filtering Method Using Spectrum of AERI (AERI 스펙트럼 분석을 통한 구름에 영향을 받은 스펙트럼 자료 제거 방법 개선)

  • Cho, Joon-Sik;Goo, Tae-Young;Shin, Jinho
    • Korean Journal of Remote Sensing, v.31 no.2, pp.137-148, 2015
  • The National Institute of Meteorological Research (NIMR) has operated a Fourier Transform InfraRed (FTIR) spectrometer, the Atmospheric Emitted Radiance Interferometer (AERI), on Anmyeon Island, Korea since June 2010. A ground-based AERI, whose hyperspectral infrared sensor is similar to those flown on satellites, can serve as an alternative means of validating satellite-based remote sensing. In this regard, the NIMR has focused on improving the quality of AERI retrievals, particularly the cloud-data filtering method. An AERI spectrum measured on a typical clear day was selected as the reference spectrum, and the atmospheric window region of the spectrum was used. Threshold tests were performed to select a valid threshold. We retrieved methane using the new method based on the reference spectrum and, separately, using the existing method based on KLAPS cloud cover information, and compared each retrieval with ground-based in-situ measurements. The quality of the AERI methane retrievals from the new method was significantly better than that from the KLAPS-based method. In addition, a comparison of the vertical total column of methane from AERI and GOSAT showed good agreement.
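The filtering idea described above, comparing each spectrum against a clear-sky reference in the atmospheric window and rejecting it beyond a threshold, can be sketched as follows (the function, mask, and threshold value are illustrative assumptions; the paper's actual channels and threshold are not given in this listing):

```python
import numpy as np

def is_cloud_contaminated(spectrum, reference, window_mask, threshold):
    """Flag a spectrum as cloud-affected when its mean radiance in the
    atmospheric-window channels exceeds the clear-sky reference by more
    than a chosen threshold (clouds raise radiance in the window)."""
    diff = np.mean(spectrum[window_mask] - reference[window_mask])
    return bool(diff > threshold)
```

The atmospheric window is used because the clear atmosphere emits little there, so any excess radiance relative to the clear-day reference is a strong cloud signature.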

Committee Learning Classifier based on Attribute Value Frequency (속성 값 빈도 기반의 전문가 다수결 분류기)

  • Lee, Chang-Hwan;Jung, In-Chul;Kwon, Young-S.
    • Journal of KIISE: Databases, v.37 no.4, pp.177-184, 2010
  • These days, many kinds of data, including sensor, delivery, credit, and stock data, are generated continuously and in massive quantities. It is difficult to learn from such data because they are large in volume and their concepts change fast. To handle these problems, learning methods based on sliding windows over time have been used, but these approaches must rebuild the model every time new data arrive, which requires much time and cost. Therefore, very simple incremental learning methods are needed. The Bayesian method is one example, but it has the disadvantage of requiring prior knowledge (probabilities) of the data. In this study, we propose a learning method based on attribute values that can be applied even when the prior probabilities of the data are unknown. The main idea is that each attribute value is regarded as an expert learner, and summing the votes of these expert learners leads to better results. Experimental results show that our method learns from data very fast and performs well compared with current learning methods (decision tree and Bayesian).
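The "each attribute value is an expert" idea can be made concrete with a small incremental sketch (class name, proportional-vote weighting, and tie behavior are my assumptions; the abstract does not specify the exact vote combination rule):

```python
from collections import defaultdict

class AttributeValueCommittee:
    """Incremental committee classifier: each (attribute, value) pair is
    an expert keeping per-class frequency counts; prediction sums the
    votes of the experts matching an instance's attribute values."""

    def __init__(self):
        # counts[(attr_index, value)][label] -> observed frequency
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, instance, label):
        # One counter update per attribute: no model rebuild needed.
        for i, value in enumerate(instance):
            self.counts[(i, value)][label] += 1

    def predict(self, instance):
        votes = defaultdict(float)
        for i, value in enumerate(instance):
            expert = self.counts.get((i, value))
            if not expert:
                continue  # unseen value: this expert abstains
            total = sum(expert.values())
            for label, n in expert.items():
                votes[label] += n / total  # each expert votes proportionally
        return max(votes, key=votes.get) if votes else None
```

Learning is a constant-time counter increment per attribute, which matches the abstract's motivation: no prior probabilities are needed and no model is rebuilt when new data arrive.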

Effectiveness of Real-time Oxygen Control in Fresh Produce Container Equipped with Gas-diffusion Tube (기체확산 튜브 부착 신선 농산물 용기에서의 실시간 산소농도 제어의 효과)

  • Jo, Yun Hee;An, Duck Soon;Lee, Dong Sun
    • Korean Journal of Packaging Science & Technology, v.19 no.3, pp.119-123, 2013
  • A simplified control logic was devised to fabricate and operate a modified atmosphere (MA) container for fresh produce equipped with a gas-diffusion tube whose opening/closing was controlled in response to the real-time $O_2$ concentration. This is a simplified ramification of the previously developed control logic that uses both $O_2$ and $CO_2$ concentrations ([$O_2$] & [$CO_2$]). The developed logic was applied to and tested on a container system filled with spinach at $10^{\circ}C$, which has an optimum MA window of 7~10% [$O_2$] and 5~10% [$CO_2$]. It was shown that setting a proper on-off limit (11%) for $O_2$ control, based on the assumed relationship $[O_2]+[CO_2]=21%$, could keep the $CO_2$ concentration just below the upper tolerance limit ($[CO_2]_H$, 10%). The $O_2$ control point can be the lower tolerance limit or an adjusted one (21-$[CO_2]_H$), depending on the commodity's MA requirement. The developed logic, using a single $O_2$ sensor, attained an equilibrated [$O_2$] of 11% with [$CO_2$] of 8~9%, which was the desired range and similar to that of its predecessor ([$O_2$] of 9~10% with [$CO_2$] of 10%) using both $O_2$ and $CO_2$ sensors. Both MA containers (one with single-$O_2$-sensor control and one with $O_2$ and $CO_2$ sensors) also kept the spinach quality without significant difference between them, and significantly better than a perforated control package in air.
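The single-sensor on-off logic reduces to one comparison against the 11% limit; the toy hourly simulation below only illustrates the resulting oscillation around the limit (the respiration and diffusion rates are made-up illustrative numbers, not the paper's measurements):

```python
def tube_should_open(o2_percent, on_off_limit=11.0):
    """Open the gas-diffusion tube when produce respiration has pulled
    O2 below the on-off limit; close it otherwise. With [O2] + [CO2]
    assumed to sum to 21%, holding O2 near 11% keeps CO2 below 10%."""
    return o2_percent < on_off_limit

def simulate_container(hours, o2=21.0, limit=11.0,
                       respiration=0.5, diffusion=0.8):
    """Toy hourly simulation of the on-off cycle (illustrative rates)."""
    history = []
    for _ in range(hours):
        open_tube = tube_should_open(o2, limit)
        if open_tube:
            o2 += diffusion * (21.0 - o2) * 0.5  # ambient air diffuses in
        o2 = max(o2 - respiration, 0.0)          # produce consumes O2
        history.append((round(o2, 2), open_tube))
    return history
```

The appeal of this scheme, as the abstract notes, is that one inexpensive $O_2$ sensor replaces the $O_2$/$CO_2$ sensor pair while achieving a similar equilibrium atmosphere.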


A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.163-177, 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognition of the simple body movements of an individual user to recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors, including the accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status is defined as a subset of user interaction behavior: whether the user is accompanying an acquaintance at a close distance, and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation is introduced. Nearest-neighbor interpolation is applied to synchronize the timestamps of data collected from different sensors.
Normalization is performed for each x, y, z axis value of the sensor data, and sequence data are generated with the sliding window method. The sequence data then become the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consists of three convolutional layers and has no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function of the model is the cross-entropy function, and the weights of the model are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (Adam) optimization algorithm with a mini-batch size of 128. Dropout is applied to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate is set to 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of a majority vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable trained models tailored to the training data to be transferred to evaluation data that follows a different distribution. We expect to obtain a model capable of robust recognition performance against changes in data not considered in the model learning stage.
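The normalization and sliding-window preprocessing described above can be sketched as follows (the function name and the window/step sizes are illustrative assumptions; the abstract does not state the actual window length used):

```python
import numpy as np

def make_sequences(data, window, step):
    """Per-axis z-score normalization followed by sliding-window
    sequence generation, producing CNN-ready (n_windows, window,
    n_channels) input from a (n_samples, n_channels) sensor stream."""
    data = np.asarray(data, dtype=np.float32)
    mean = data.mean(axis=0)
    std = data.std(axis=0) + 1e-8          # avoid division by zero
    normed = (data - mean) / std
    starts = range(0, len(normed) - window + 1, step)
    return np.stack([normed[s:s + window] for s in starts])
```

With three 3-axis sensors the channel dimension is 9, and each generated window is one training sequence for the CNN-LSTM stack described in the abstract.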