12th Conference on Computer and Robot Vision

Halifax, Nova Scotia.   June 3-5, 2015.

Welcome to the home page for CRV 2015, which will be held at Dalhousie University and Saint Mary's University.

CRV is an annual conference hosted in Canada and co-located with Graphics Interface (GI) and Artificial Intelligence (AI). A single registration covers attendance at all three conferences. Please see the AI/GI/CRV general conference website for more information.

The CRV proceedings are published through the Conference Publishing Services (CPS). Accepted papers will be submitted to IEEE Xplore, which has published CRV papers since 2004. See the archive for links to previous years' papers.

News

  • A draft of the conference program has been posted.

  • The online submission site for CRV 2015 is now open. Submission instructions are found at the "Submissions" tab.

  • Call For Papers (PDF)

  • We are excited to announce the Keynote speakers for 2015: Kostas Daniilidis (U. Penn) and Demetri Terzopoulos (UCLA).

Important Dates:

Paper submission March 3, 2015
Acceptance/rejection notification March 27, 2015
Revised camera-ready papers April 10, 2015
Early registration April 20, 2015 (Registration Website)
Conference June 3-5, 2015

Conference History

In 2004, the 17th Vision Interface conference was renamed the 1st Canadian Conference on Computer and Robot Vision. In 2011, the name was shortened to Conference on Computer and Robot Vision.

CRV is sponsored by the Canadian Image Processing and Pattern Recognition Society (CIPPRS).


CRV 2015 Program

Below is a draft of the full CRV conference program.

A higher-level view of the joint conference program, which also includes the AI and GI meetings, can be found here.


DAY 1: Wednesday, June 3, 2015

8:30-9:00 Joint Conference Welcome

9:00-10:00 Symposium: Novel Imaging Techniques

10:00-10:30 Oral Presentations

  • Development of a Low Cost Gamma-Ray Imaging System Using Handheld Scintillation Detectors for Visual Surveying of Radiation Fields with Robots  
      Alex Miller, Rachid Machrafi

  • Deep Learning Architectures for Soil Property Prediction  
      Matthew Veres, Griffin Lacey, Graham W. Taylor

11:00-12:00 KEYNOTE: Kostas Daniilidis

12:00-12:30 Oral Presentations

  • Drifter Sensor Network for Environmental Monitoring  
     Daniel Boydstun, Matthew Farich, John McCarthy III, Silas Rubinson, Zachary Smith, Ioannis Rekleitis
  • The Battle for Filter Supremacy: A Comparative Study of the Multi-State Constraint Kalman Filter and the Sliding Window Filter  
     Lee Clement, Valentin Peretroukhin, Jacob Lambert, and Jonathan Kelly

12:30-14:00 Lunch

2:00-3:00 LIDAR Symposium

3:00-3:30 Oral Presentations

  • Registration of Noisy Point Clouds using Virtual Interest Points  
     Mirza Tahir Ahmed, Mustafa Mohamad, Joshua A. Marshall, Michael Greenspan
  • Simultaneous Scene Reconstruction and Auto-calibration using Constrained Iterative Closest Point for 3D Depth Sensor Array  
     Meng Xi Zhu, Christian Scharfenberger, Alexander Wong, David A. Clausi

4:00-5:30 POSTER SESSION (see bottom of page for list of posters)


DAY 2: Thursday, June 4, 2015

9:00-10:00 Autonomous Robots Symposium

10:00-10:30 Oral Presentations

  • Eyes in the Back of Your Head: Robust Visual Teach & Repeat Using Multiple Stereo Cameras  
     Michael Paton, Francois Pomerleau, and Timothy D. Barfoot
  • Being in Two Places at Once: Smooth Visual Path Following on Globally Inconsistent Pose Graphs  
     Sebastian Kai van Es and Timothy D. Barfoot

11:00-12:30 Doctoral Dissertation Award Session

12:30-14:00 Lunch

2:00-2:30 Oral Presentations

  • At All Costs: A Comparison of Robust Cost Functions for Camera Correspondence Outliers  
     Kirk MacTavish and Timothy D. Barfoot
  • RKLT: 8 DOF real-time robust video tracking combining coarse RANSAC features and accurate fast template registration  
     Xi Zhang, Abhineet Singh, Martin Jagersand

2:30-3:30 Vision for Graphics Symposium

4:00-5:00 PM Intelligent Vehicles Symposium

5:00-5:30 PM Oral Presentations

  • Detection and Segmentation of Quasi-Planar Surfaces Through Expectation Maximization under a Planar Homography Constraint  
     Christopher Herbon, Gabriel Schumann, Klaus-Dietz Tönnies, Bernd Stock
  • Dense Depth Map Reconstruction from Sparse Measurements Using a Multilayer Conditional Random Field Model  
      Francis Li, Edward Li, Mohammad Javad Shafiee, Alexander Wong, John Zelek

DAY 3: Friday, June 5, 2015

9:00-10:00 AM Vision and Learning Symposium

10:00-10:30 AM Oral Presentations

  • Zero-Shot Object Recognition Using Semantic Label Vectors  
      Shujon Naha, Yang Wang
  • Fire Detection in Videos of Violent Crowds Acquired with Handheld Devices  
      Kawthar Moria, Alexandra Branzan Albu, Kui Wu

11:00-12:00 Object Detection Symposium

12:00-12:30 PM Oral Presentations

  • Feature Ranking in Dynamic Texture Clustering  
      Thanh Minh Nguyen, Jonathan Wu, and Dibyendu Mukherjee
  • CPS: 3D Compositional Parts Segmentation through Grasping  
      Safoura Rezapour Lakani, Mirela Popa, Antonio J. Rodriguez-Sanchez, Justus Piater

12:30-14:00 Lunch

2:00-3:00 KEYNOTE: Demetri Terzopoulos

3:00-3:30 Oral Presentations

  • Automated Localization of Brain Tumors in MRI Using Potential-K-means Clustering Algorithm  
      Ivan Cabria, Iker Gondra
  • Lung Nodule Classification Using Deep Features in CT Images  
      Devinder Kumar, Alexander Wong, David A. Clausi

4:00-5:00 PM Human-Robot Interaction / Assistive Tech Symposium

5:00-5:30 Oral Presentations

  • 3D vs. 2D: On the Importance of Registration for Hallucinating Faces under Unconstrained Poses  
      Chengchao Qu, Christian Herrmann, Eduardo Monari, Tobias Schuchert, Jürgen Beyerer
  • Reconstruction of 3-D Density Functions from Few Projections: Structural Assumptions for Graceful Degradation  
      Michael Cormier, Daniel J. Lizotte, Richard Mann

List of Posters (Wed. June 3)

Improved Threshold Selection by using Calibrated Probabilities for Random Forest Classifiers
Florian Baumann, Jinghui Chen, Karsten Vogt, Bodo Rosenhahn

An Online Unsupervised Feature Selection and Its Application for Background Suppression
Thanh Minh Nguyen, Q. M. Jonathan Wu, and Dibyendu Mukherjee

Image Sensor Modeling: Noise and Linear Transformation Impacts on the Color Gamut
Mehdi Rezagholizadeh, James J. Clark

Mobile 3D Gaze Tracking Calibration
Christian Scheel, Oliver Staadt

A Perceptual Depth Shape-based CRF Model for Deformable Surface Labeling
Gang Hu, Derek Reilly, Qigang Gao, Arthur Bastos, Nhu Loan Truong

Face Retrieval on Large-Scale Video Data
Christian Herrmann, Jürgen Beyerer

Latent SVM for Object Localization in Weakly Labeled Videos
Mrigank Rochan and Yang Wang

A novel clustering by key identification
Ehsan Fazl Ersi, Bo Wang, Maysum Panju

A Fingerprint Indexing Approach Using Multiple Similarity Measures and Spectral Clustering
Ntethelelo A. Mngenge, Linda Mthembu, Fulufhelo V. Nelwamondo, Cynthia Ngejane

A method for global nonrigid registration of multiple thin structures
Mark Brophy, Ayan Chaudhury, Steven S. Beauchemin, John L. Barron

Uncertainty Reduction via Heuristic Search Planning on Hybrid Metric/Topological Map
Qiwen Zhang, Ioannis Rekleitis, Gregory Dudek

Zhang Solution to the Eye Contact Correction Problem in Tele-presence Systems
Pierre Boulanger and Xiaozhou Zhou

Safe Close-Proximity and Physical Human-Robot Interaction Using Industrial Robots
Danial Nakhaeinia, Pascal Laferrière, Pierre Payeur, Robert Laganière

On Visual Servoing to Improve Performance of Robotic Grasping
Mona Gridseth, Katharina Hertkorn, Martin Jagersand

Evolution of Programs for Segmentation of Microscopic Images
Nawwaf Kharma et al.

Preprocessing Realistic Video for Contactless Heart Rate Monitoring Using Video Magnification
Ahmed Alzahrani and Anthony Whitehead

A Hidden Markov Model for Vehicle Detection and Counting
Nicholas Miller, Mohan A. Thomas, Justin A. Eichel, Akshaya Mishra

An Integrated System for Mapping Red Clover Ground Cover Using Unmanned Aerial Vehicles, A Case Study in Precision Agriculture
Ammar M. Abuleil, Graham W. Taylor, Medhat Moussa

Shrink Wrapping Small Objects
Sricharana Rajagopal and Kaleem Siddiqi

Computer Vision Based Autonomous Robotic System for 3D Plant Growth Measurement
Ayan Chaudhury, Christopher Ward, Ali Talasaz, Alexander G. Ivanov, Norman P.A. Huner, Bernard Grodzinski, Rajni V. Patel, John L. Barron

Vision-based Collision Avoidance for Personal Aerial Vehicles using Dynamic Potential Fields
Faizan Rehmatullah, Jonathan Kelly

Orbital SLAM
Corinne Vassallo, Wennie Tabib, Kevin Peterson

Out-of-Core Surface Reconstruction from Large Point Sets for Infrastructure Inspection
Chen Xu, Simon Fréchet, Denis Laurendeau, François Mirallès

Situational Awareness for Manufacturing Applications
Olivier St-Martin Cormier, Andrew Phan and Frank P. Ferrie

Improving Segmentation Boundaries with Nonparametric Image Parsing
Hong Pan and Jochen Lang

Paper Submission Instructions

Please refer to the Call For Papers for information on the goals and scope of CRV.

The online submission site for CRV 2015 is now open. The paper submission deadline is 3 March 2015, 11:59 PM PST. Please note that this is a firm deadline.

The CRV review process is single-blind: authors are not required to anonymize submissions. Submissions must be between 4 and 8 pages long (two-column format). Submissions shorter than 6 pages will most likely be considered for poster sessions only. Use the following templates to prepare your CRV submission:

Please direct any questions regarding the paper submission process to the conference co-chairs by emailing computerrobotvision2015 "at" gmail.com

CRV 2015 Co-Chairs

  • Michael Langer, McGill University
  • Faisal Qureshi, University of Ontario Institute of Technology (UOIT)

CRV 2015 Program Committee

  • Mohand Said Allili, Université du Québec en Outaouais, Canada
  • Robert Allison, York University, Canada
  • Alexander Andreopoulos, IBM Research, Canada
  • John Barron, University of Western Ontario, Canada
  • Steven Beauchemin, University of Western Ontario, Canada
  • Robert Bergevin, Université Laval, Canada
  • Guillaume-Alexandre Bilodeau, École Polytechnique Montréal, Canada
  • Pierre Boulanger, University of Alberta, Canada
  • Jeffrey Boyd, University of Calgary, Canada
  • Marcus Brubaker, University of Toronto, Canada
  • Neil Bruce, University of Manitoba, Canada
  • Gustavo Carneiro, University of Adelaide, Australia
  • James Clark, McGill University, Canada
  • David Clausi, University of Waterloo, Canada
  • Dana Cobzas, University of Alberta, Canada
  • Jack Collier, DRDC Suffield, Canada
  • Kosta Derpanis, Ryerson University, Canada
  • Gregory Dudek, McGill University, Canada
  • James Elder, York University, Canada
  • Mark Eramian, University of Saskatchewan, Canada
  • Frank Ferrie, McGill University, Canada
  • Alexander Ferworn, Ryerson University, Canada
  • Paul Fieguth, University of Waterloo, Canada
  • Brian Funt, Simon Fraser University, Canada
  • Philippe Giguère, Laval University, Canada
  • Yogesh Girdhar, Woods Hole Oceanographic Institution, USA
  • Minglun Gong, Memorial University of Newfoundland, Canada
  • Michael Greenspan, Queen's University, Canada
  • Kamal Gupta, Simon Fraser University, Canada
  • Wolfgang Heidrich, University of British Columbia, Canada
  • Jesse Hoey, University of Waterloo, Canada
  • Andrew Hogue, University of Ontario Institute of Technology, Canada
  • Randy Hoover, South Dakota School of Mines and Technology, USA
  • Martin Jagersand, University of Alberta, Canada
  • Michael Jenkin, York University, Canada
  • Allan Jepson, University of Toronto, Canada
  • Hao Jiang, Boston College, USA
  • Pierre-Marc Jodoin, Université de Sherbrooke, Canada
  • Jonathan Kelly, University of Toronto, Canada
  • Dana Kulic, University of Waterloo, Canada
  • Robert Laganière, University of Ottawa, Canada
  • Jean-Francois Lalonde, Laval University, Canada
  • Jochen Lang, University of Ottawa, Canada
  • Cathy Laporte, ETS Montreal, Canada
  • Denis Laurendeau, Laval University, Canada
  • Howard Li, University of New Brunswick, Canada
  • Jim Little, University of British Columbia, Canada
  • Shahzad Malik, University of Toronto, Canada
  • Scott McCloskey, Honeywell Labs, USA
  • David Meger, McGill University, Canada
  • Jean Meunier, Universite de Montreal, Canada
  • Max Mignotte, Universite de Montreal, Canada
  • Gregor Miller, University of British Columbia, Canada
  • Greg Mori, Simon Fraser University, Canada
  • Christopher Pal, École Polytechnique Montréal, Canada
  • Pierre Payeur, University of Ottawa, Canada
  • Cédric Pradalier, Georgia Tech. Lorraine, France
  • Yiannis Rekleitis, University of South Carolina, USA
  • Junaed Sattar, Clarkson University, USA
  • Christian Scharfenberger, University of Waterloo, Canada
  • Angela Schoellig, University of Toronto, Canada
  • Kaleem Siddiqi, McGill University, Canada
  • Gunho Sohn, York University, Canada
  • Minas Spetsakis, York University, Canada
  • Uwe Stilla, Technische Universitaet Muenchen, Germany
  • Graham Taylor, University of Guelph, Canada
  • Lan Tian, Stanford University, USA
  • Chi Hay Tong, University of Oxford, United Kingdom
  • John Tsotsos, York University, Canada
  • Olga Veksler, University of Western Ontario, Canada
  • Ruisheng Wang, University of Calgary, Canada
  • Yang Wang, University of Manitoba, Canada
  • Steven Waslander, University of Waterloo, Canada
  • Alexander Wong, University of Waterloo, Canada
  • Robert Woodham, University of British Columbia, Canada
  • Yijun Xiao, University of Edinburgh, United Kingdom
  • Herb Yang, University of Alberta, Canada
  • Alper Yilmaz, Ohio State University, USA
  • John Zelek, University of Waterloo, Canada
  • Hong Zhang, University of Alberta, Canada

CIPPRS Executive

  • President: Gregory Dudek, McGill University
  • Treasurer: John Barron, Western University
  • Secretary: Jim Little, University of British Columbia

Keynote Speakers

We will have two keynote speakers in 2015: Kostas Daniilidis (U. Penn) on June 3 and Demetri Terzopoulos (UCLA) on June 5. More information, including titles, abstracts, and bios, will be announced later in the fall.

Invited Symposia

CRV 2015 will feature eight exciting symposia on subtopics related to computer and robot vision. The schedule will be announced by early April.

Novel Imaging Techniques

Wed. June 3, 9:00-10:00 AM

  • Marcus Brubaker, Univ. of Toronto

    "Efficient 3D Molecular Structure Estimation with Electron Cryomicroscopy"

    Abstract:
    Discovering the 3D structure of molecules such as proteins and viruses is a fundamental research problem in biology and medicine. Electron Cryomicroscopy (Cryo-EM) is a promising vision-based technique for structure estimation which attempts to reconstruct 3D structures from 2D images. This talk reviews the computational problems in Cryo-EM which are closely related to classical vision problems such as object detection, multiview reconstruction and computed tomography. Finally, a framework is introduced for reconstruction of 3D molecular structure which exploits modern methods for stochastic optimization and importance sampling. The result is a method which is efficient, robust to initialization and flexible.

  • Sebastien Roy, Univ. de Montreal

    "The Omnipolar Camera"

    Abstract: TBD

LIDAR

Wed. June 3, 2:00-3:00 PM

  • Ruisheng Wang, Univ. of Calgary

    "Scene Parsing Using Graph Matching on Street View Data"

    Abstract:
    In this talk, a street scene parsing scheme that takes advantage of images from perspective cameras and range data from LiDAR is presented. First, pre-processing on the image set is performed and the corresponding point cloud is segmented according to semantics and transformed into an image pose. A graph matching approach is introduced into our parsing framework in order to identify similar sub-regions from training and test images in terms of both local appearance and spatial structure. By using the sub-graphs inherited from training images, as well as the cues obtained from point clouds, this approach can effectively interpret the street scene via a guided MRF inference. We further introduce low-rank regularization into the graph matching and reformulate the low-rank graph matching problem into a standard semidefinite programming problem, which is much easier to solve. The matching performance is enhanced and experimental results show a promising performance of our approach.

  • Gunho Sohn, York Univ.

    "3D Infrastructure Scene Reconstruction Using Laser Point Clouds

    Abstract:
    LiDAR (Light Detection and Ranging) is an emerging remote sensing technology that directly measures the distance between the sensor and a target surface using the latest time-of-flight technology, thus providing massive and highly accurate three-dimensional point clouds. Over the last decade, LiDAR has been rapidly adopted as a primary sensor in the Geomatics community for supporting a wide range of applications such as bathymetry, forestry, mining, ecology, topographic mapping, and engineering. One of the primary research interests in Geomatics is to reconstruct “As-Built” infrastructure models, approximating the existing infrastructure conditions modelled with semantically rich primitives. Having such an accurate model representation allows us to conduct high-precision risk analysis, inventory update, and management. However, like many other vision tasks, automatically generating large-scale “As-Built” models remains an unresolved research problem. Thus, today’s practice for infrastructure management relies heavily on a human-centric and time-consuming process. This presentation will introduce the latest research activities at York University, studying image understanding and model reconstruction of building facades and rooftops, single trees, railways and power lines using LiDAR point clouds.

Autonomous Robots

Thurs. June 4, 9:00-10:00 AM

  • Tim Barfoot, Univ. of Toronto

    "Long-Term Visual Route Following for Mobile Robots"

    Abstract:
    I will describe a particular approach to visual route following for mobile robots that we have developed, called Visual Teach & Repeat (VT&R), and what I think the next steps are to make this system usable in real-world applications. We can think of VT&R as a simple form of simultaneous localization and mapping (without the loop closures) along with a path-tracking controller; the idea is to pilot a robot manually along a route once and then be able to repeat the route (in its own tracks) autonomously many, many times using only visual feedback. VT&R is useful for such applications as load delivery (mining), sample return (space exploration), and perimeter patrol (security). Despite having demonstrated this technique for over 500 km of driving on several different robots, there are still many challenges we must meet before we can say this technique is ready for real-world applications. These include (i) visual scene changes such as lighting, (ii) physical scene changes such as path obstructions, and (iii) vehicle changes such as tire wear. I’ll discuss our progress to date in addressing these issues and the next steps moving forward.

  • Howard Li, Univ. of New Brunswick

    "Perception, Navigation and Target Localization for Autonomous UAVs"

    Abstract:
    Unmanned Aerial Vehicles (UAVs) and robots are usually associated with situations involving hazardous environments and repetitive, menial tasks. UAVs can be used in many areas, such as surveillance, forestry management, mine hunting, automatic inspection of power plants and refineries, and disposal of hazardous materials. In this talk, we will present our current research in UAVs and robotics. We will present our sensing, perception, navigation, and localization methods. Simultaneous Localization and Mapping algorithms will be introduced. Results of our current research in robotics and unmanned vehicles will be presented.

Vision for Graphics

Thurs. June 4, 2:30-3:30 PM

  • Jean-Francois Lalonde, Laval Univ.

    "Understanding outdoor lighting in vision and graphics"

    Abstract:
    Outdoor illumination creates challenges for computer vision and graphics alike. In vision, algorithms routinely get confused by strong shadows, highlights, and glare. In graphics, simulating the extreme dynamic range of outdoor lighting needs to be done accurately to realistically synthesize these effects. In this talk, I will present approaches that aim to improve our understanding of natural lighting with applications in both vision and graphics. First, I will briefly present approaches that rely on a physically-based illumination model to infer scene and illumination properties from time-lapse sequences and single images, by explicitly reasoning about the illumination conditions. Second, I will present recent work that relies on a data-driven model, trained on a novel dataset of 8,000+ HDR photographs of daytime skies. We leverage this new dataset to 1) automatically estimate the illumination conditions in image collections, which allows us to seamlessly insert virtual objects in the images, and 2) characterize the behavior of photometric stereo under natural lighting.

  • Minglun Gong, Memorial Univ.

    "Modeling and analyzing 3D shapes using clues from 2D images"

    Abstract:
    An image is worth a thousand words. From images, we humans are able to infer the 3D shape of an object and to decompose the object into semantically meaningful parts. Now, is it possible to teach computers to do these tasks? Two recent research projects that work along this direction will be presented in this talk. The first one investigates how the 3D modeling of a flower head can be facilitated using a single photo of the flower. The core idea is that a flower head typically consists of petals with similar 3D geometries, yet their observed shapes in 2D images vary due to differences in projection direction. Exploiting this variation allows us to reconstruct the 3D geometry of the petals from a single image. The second project studies how to segment 3D models into semantically meaningful parts based on knowledge learned from labeled 2D images. Here the input 3D model is treated as a collection of 2D projections, which are labeled using training images of similar objects. The 3D model is then segmented by summarizing the labeling of its projections. The key is, for each query projection, how to retrieve objects with similar semantic parts and transfer their labels over.

Intelligent Vehicles

Thurs. June 4, 4:00-5:00 PM

  • Raquel Urtasun, Univ. of Toronto

    "Towards Affordable Self Driving Cars"

    Abstract:
    Developing autonomous systems that are able to assist humans in everyday tasks is one of the grand challenges in modern computer science. Notable examples are personal robotics for the elderly and people with disabilities, as well as autonomous driving systems which can help decrease fatalities caused by traffic accidents. In order to perform tasks such as navigation, recognition and manipulation of objects, these systems should be able to efficiently extract 3D knowledge of their environment. In this talk, I'll show how graphical models provide a great mathematical formalism to extract this knowledge. In particular, I'll focus on a few examples, including 3D reconstruction, 3D object and layout estimation, and self-localization.

  • Steven Beauchemin, Western University

    Title: "Vehicular Instrumentation for the Study of Driver Intent and Related Applications"

    Abstract:
    We describe a vehicular instrumentation for the study of driver intent. Our instrumented vehicle is capable of recording the 3D gaze of the driver and relating it, in real time, to the frontal depth map obtained with a stereo system, together with vehicular parameters such as actuator motion, speed, and other relevant driving information. Additionally, we describe other real-time algorithms implemented in the vehicle, such as a frontal vehicle recognition system, a free lane space estimation method, and a GPS position-correcting technique that uses lane recognition as landmarks.

Vision and Learning

Fri. June 5, 9:00-10:00 AM

  • Graham Taylor, Univ. of Guelph

    "Learning Representations with Multiplicative Interactions"

    Abstract:
    Representation learning algorithms are machine learning algorithms which involve the learning of features or explanatory factors. Deep learning techniques, which employ several layers of representation learning, have achieved much recent success in machine learning benchmarks and competitions; however, most of these successes have been achieved with purely supervised learning methods and have relied on large amounts of labeled data. In this talk, I will discuss a lesser-known but important class of representation learning algorithms that are capable of learning higher-order features from data. The main idea is to learn relations between pixel intensities rather than the pixel intensities themselves by structuring the model as a tri-partite graph which connects hidden units to pairs of images. If the images are different, the hidden units learn how the images transform. If the images are the same, the hidden units encode within-image pixel covariances. Learning such higher-order features can yield improved results on recognition and generative tasks. I will discuss recent work on applying these methods to structured prediction problems.

  • Yang Wang, Univ. of Manitoba

    "Recognizing and Localizing Novel Objects"

    Abstract:
    A lot of progress has been made in object recognition in the last few years. Now we have reasonably accurate systems that can recognize thousands of object categories. We also have good object detectors for a handful of object classes. However, since the number of object classes is so large and new object classes might emerge over time, it is not clear whether the standard supervised learning approach is the final solution for object recognition. In this talk, I will discuss our recent work on transfer learning for recognizing and localizing objects for which we do not have training data.

Object Detection

Fri. June 5, 11:00 AM-12:00 PM

  • Sven Dickinson, Univ. of Toronto

    "Detecting Symmetric Parts in Cluttered Scenes"

    Abstract:
    Perceptual grouping played a prominent role in support of early object recognition systems, which typically took an input image and a database of shape models and identified which of the models was visible in the image. Using intermediate-level shape priors, causally related shape features were grouped into discriminative shape indices that were used to prune the database down to a few promising candidates that might account for the query. In recent years, however, the recognition (categorization) community has focused on the object detection problem, in which the input image is searched for a specific target object. Since indexing is not required to select the target model, perceptual grouping is not required to construct a discriminative shape index. As a result, perceptual grouping activity at our major conferences has diminished. However, there are clear signs that the recognition community is moving from appearance back to shape, and from detection back to multi-class object categorization. Shape-based perceptual grouping will play a critical role in facilitating this transition. One of the most powerful mid-level shape priors is symmetry, which forms the basis for many approaches to part-based object modeling and recognition. In this talk, I will review our recent progress on detecting symmetric parts in cluttered scenes.

  • Sanja Fidler, Univ. of Toronto

    "Understanding Complex Scenes and People That Talk about Them"

    Abstract:
    Language is an important link between high-level semantic concepts and lower-level visual perception. A successful robotic platform needs to both understand the visual world and interpret the linguistic instructions given by the human user in order to react appropriately. In this talk, I'll present our recent work on 3D understanding of indoor scenes, and show how natural sentential descriptions can be exploited to improve 3D visual parsing.

Human-Robot Interaction and Assistive Tech

Fri. June 5, 4:00-5:00 PM

  • Dana Kulic, Univ. of Waterloo

    "Human Motion Analysis for Rehabilitation"

    Abstract:
    Mobility improvement for patients is one of the primary concerns of physiotherapy rehabilitation. Providing the physiotherapist and the patient with a quantified and objective measure of progress can be beneficial for monitoring the patient's performance and providing guidance and feedback. In this talk, we describe a system for data collection, on-line pose estimation, segmentation and user interface for patients undergoing lower body rehabilitation. An approach for quantifying patient performance is also introduced. Results from multiple studies evaluating the system with patients undergoing rehabilitation following joint replacement surgery will be presented.

  • Babak Taati, Toronto Rehabilitation Institute

    "Computer vision systems in dementia care"

    Abstract:
    Computer vision systems can play a role in providing care to individuals living with dementia. In this talk, I will first briefly review vision-based systems to provide assistance to older adults with dementia and to assist with usability studies for this population. I will then present preliminary results on assessing the cognitive status of older adults by way of monitoring common activities of daily living. Early identification of dementia can potentially lead to improved quality of life both for older adults with dementia and their family and caregivers who can better plan informal/formal care in advance.

Links to Previous Conferences

This page archives historical content from past CRV meetings. A second source for some of this information is maintained at the CIPPRS website.