
Institute of Information Science, Academia Sinica

Events


Seminar


Rethink Computer Vision with Global Public Cameras

  • Speaker: Ms. Sara Aghajanzadeh (School of Electrical and Computer Engineering, Purdue University)
    Host: 張原豪
  • Time: 2019-09-27 (Fri.) 14:00 ~ 16:00
  • Location: Auditorium 106, New Building, Institute of Information Science
Abstract

Computer vision has been widely used to discover patterns in complex, unstructured data such as videos and images. Successful techniques need vast amounts of data and labels for training and validation, and creating datasets and labels requires significant effort. A team at Purdue University creates datasets using global public cameras that provide real-time visual data. These cameras continuously stream live views of national parks, zoos, city halls, streets, university campuses, highways, shopping malls, and so on. Because the cameras are stationary, each one carries contextual information (such as time and location) about its visual data. By cross-referencing this information with other data sources (such as weather reports and event calendars), it is possible to label the data automatically. This system is a foundation for many research topics in analyzing visual data, such as (1) how the system can automatically produce labels for computer vision, (2) how to automatically place cameras to meet the constraints of computer vision, and (3) how to protect the privacy of the video streams (real-time visual data) used in computer vision.
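The cross-referencing idea in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical and not from the Purdue system: the `Snapshot` record, the `WEATHER` table, and the `auto_label` function are illustrative stand-ins for a camera's contextual metadata, an external weather source, and the automatic labeling step.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    """One frame from a stationary camera, with its contextual metadata."""
    camera_id: str
    location: str
    hour: int  # local hour at capture time

# Illustrative external data source, keyed by (location, hour).
# In practice this would be a weather service or an event calendar.
WEATHER = {
    ("purdue_campus", 14): "rain",
    ("purdue_campus", 9): "clear",
}

def auto_label(snap: Snapshot) -> dict:
    """Derive labels from metadata alone, with no human annotation."""
    labels = {"daytime": 6 <= snap.hour < 18}
    weather = WEATHER.get((snap.location, snap.hour))
    if weather is not None:
        labels["weather"] = weather
    return labels

print(auto_label(Snapshot("cam42", "purdue_campus", 14)))
# → {'daytime': True, 'weather': 'rain'}
```

Because the cameras do not move, the (location, time) key is fixed per frame, which is what makes this kind of automatic cross-referencing feasible at scale.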

Bio

Sara Aghajanzadeh is a graduate student in the School of Electrical and Computer Engineering at Purdue University. She graduated from Purdue University with a Bachelor of Science in Computer Information Technology and a minor in Management. She is the lead author of the book chapter “Observing Human Behavior through Worldwide Network Cameras” and a co-author of “Dynamic Sampling in Convolutional Neural Networks for Imbalanced Data Classification” (IEEE MIPR 2018) and “See the World through Network Cameras” (IEEE Computer, accepted). Her research interests lie in computer vision and embedded, low-power vision.