13th Conference on Computer and Robot Vision

Victoria, British Columbia.   June 1-3, 2016.

Welcome to the home page for CRV 2016 which will be held at the University of Victoria, Victoria, British Columbia.

CRV is an annual conference hosted in Canada, and co-located with Graphics Interface (GI) and Artificial Intelligence (AI). A single registration covers attendance at all three conferences. Please see the AI/GI/CRV general conference website for more information.


  • July 7, 2015: The CRV 2016 conference dates are now confirmed.
  • August 27, 2015: Call for papers is now available.
  • August 27, 2015: Keynote speakers announced: Prof. Jim Little (UBC) and Prof. Laurent Itti (USC) will be giving the keynotes at CRV 2016. For more information, please check out the Keynotes tab.

Important Dates:

Paper submission: February 21, 2016, 11:59 PM PST
Acceptance/rejection notification: March 25, 2016
Revised camera-ready papers: April 8, 2016
Early registration: April 19, 2016
Conference: June 1-3, 2016

Conference History

In 2004, the 17th Vision Interface conference was renamed the 1st Canadian Conference on Computer and Robot Vision. In 2011, the name was shortened to Conference on Computer and Robot Vision.

CRV is sponsored by the Canadian Image Processing and Pattern Recognition Society (CIPPRS).


CRV 2016 Program

Submission Instructions

The paper submission deadline for CRV 2016 is February 21, 2016, 11:59 PM PST. Please note that this is a firm deadline.

Please refer to the Call For Papers for information on the goals and scope of CRV.

We will post information about the CRV submission site shortly.

The CRV review process is single-blind: authors are not required to anonymize submissions. Submissions must be between 4 and 8 pages (two-column). Submissions of fewer than 6 pages will most likely be considered for poster sessions only. Use the following templates to prepare your CRV submissions:

CRV 2016 Co-Chairs

  • Faisal Qureshi, University of Ontario Institute of Technology
  • Steven L. Waslander, University of Waterloo

CRV 2016 Program Committee

  • Mohand Said Allili, Université du Québec en Outaouais, Canada
  • Robert Allison, York University, Canada
  • Alexander Andreopoulos, IBM Research, Canada
  • John Barron, University of Western Ontario, Canada
  • Steven Beauchemin, University of Western Ontario, Canada
  • Robert Bergevin, Université Laval, Canada
  • Guillaume-Alexandre Bilodeau, École Polytechnique Montréal, Canada
  • Pierre Boulanger, University of Alberta, Canada
  • Jeffrey Boyd, University of Calgary, Canada
  • Marcus Brubaker, University of Toronto, Canada
  • Gustavo Carneiro, University of Adelaide, Australia
  • James Clark, McGill University, Canada
  • David Clausi, University of Waterloo, Canada
  • Jack Collier, DRDC Suffield, Canada
  • Kosta Derpanis, Ryerson University, Canada
  • James Elder, York University, Canada
  • Mark Eramian, University of Saskatchewan, Canada
  • Paul Fieguth, University of Waterloo, Canada
  • Brian Funt, Simon Fraser University, Canada
  • Philippe Giguère, Laval University, Canada
  • Yogesh Girdhar, Woods Hole Oceanographic Institute, USA
  • Minglun Gong, Memorial University of Newfoundland, Canada
  • Michael Greenspan, Queen's University, Canada
  • Jesse Hoey, University of Waterloo, Canada
  • Andrew Hogue, University of Ontario Institute of Technology, Canada
  • Randy Hoover, South Dakota School of Mines and Technology, USA
  • Martin Jagersand, University of Alberta, Canada
  • Michael Jenkin, York University, Canada
  • Hao Jiang, Boston College, USA
  • Pierre-Marc Jodoin, Université de Sherbrooke, Canada
  • Jonathan Kelly, University of Toronto, Canada
  • Dana Kulic, University of Waterloo, Canada
  • Robert Laganière, University of Ottawa, Canada
  • Jean-Francois Lalonde, Laval University, Canada
  • Tian Lan, Stanford University, USA
  • Jochen Lang, University of Ottawa, Canada
  • Michael Langer, McGill University, Canada
  • Cathy Laporte, ETS Montreal, Canada
  • Denis Laurendeau, Laval University, Canada
  • Howard Li, University of New Brunswick, Canada
  • Jim Little, University of British Columbia, Canada
  • Shahzad Malik, University of Toronto, Canada
  • Scott McCloskey, Honeywell Labs, USA
  • David Meger, McGill University, Canada
  • Jean Meunier, Universite de Montreal, Canada
  • Max Mignotte, Universite de Montreal, Canada
  • Gregor Miller, University of British Columbia, Canada
  • Greg Mori, Simon Fraser University, Canada
  • Christopher Pal, École Polytechnique Montréal, Canada
  • Pierre Payeur, University of Ottawa, Canada
  • Cédric Pradalier, Georgia Tech. Lorraine, France
  • Yiannis Rekleitis, University of South Carolina, USA
  • Sébastien Roy, Université de Montréal, Canada
  • Junaed Sattar, Clarkson University, USA
  • Christian Scharfenberger, University of Waterloo, Canada
  • Hicham Sekkati, University of Waterloo, Canada
  • Kaleem Siddiqi, McGill University, Canada
  • Gunho Sohn, York University, Canada
  • Minas Spetsakis, York University, Canada
  • Uwe Stilla, Technische Universitaet Muenchen, Germany
  • Graham Taylor, University of Guelph, Canada
  • John Tsotsos, York University, Canada
  • Olga Veksler, University of Western Ontario, Canada
  • Ruisheng Wang, University of Calgary, Canada
  • Yang Wang, University of Manitoba, Canada
  • Alexander Wong, University of Waterloo, Canada
  • Robert Woodham, University of British Columbia, Canada
  • Yijun Xiao, University of Edinburgh, United Kingdom
  • Alper Yilmaz, Ohio State University, USA
  • John Zelek, University of Waterloo, Canada
  • Hong Zhang, University of Alberta, Canada

CIPPRS Executive

  • President: Gregory Dudek, McGill University
  • Treasurer: John Barron, Western University
  • Secretary: Jim Little, University of British Columbia

Keynote Speakers

We will have two keynote speakers in 2016:

Laurent Itti, University of Southern California


Title: Computational modeling of visual attention and object recognition in complex environments


Visual attention and eye movements in primates have been widely shown to be guided by a combination of stimulus-dependent or 'bottom-up' cues, as well as task-dependent or 'top-down' cues. Both the bottom-up and top-down aspects of attention and eye movements have been modeled computationally. Yet, it is not until recent work, which I will describe, that bottom-up models have been strictly put to the test, predicting significantly above chance the eye movement patterns, functional neuroimaging activation patterns, or, most recently, neural activity in the superior colliculus of monkeys inspecting complex dynamic scenes. In recent developments, models that increasingly attempt to capture top-down aspects have been proposed. In one system which I will describe, neuromorphic algorithms of bottom-up visual attention are employed to predict, in a task-independent manner, which elements in a video scene might more strongly attract attention and gaze. These bottom-up predictions have more recently been combined with top-down predictions, which allowed the system to learn from examples (recorded eye movements and actions of humans engaged in 3D video games, including flight combat, driving, first-person, or running a hot-dog stand that serves hungry customers) how to prioritize particular locations of interest given the task. Pushing deeper into real-time, joint online analysis of video and eye movements using neuromorphic models, we have recently been able to predict future gaze locations and intentions of future actions when a player is engaged in a task. Finally, employing deep neural networks, we show how neuroscience-inspired algorithms can also achieve state-of-the-art results in the domain of object recognition, especially over a new dataset collected in our lab comprising ~22M images of small objects filmed on a turntable, with available pose information that can be used to enhance training of the object recognition model.


Dr. Laurent Itti is a Full Professor of Computer Science, Psychology, and Neuroscience at the University of Southern California. Dr. Itti's research interests are in biologically-inspired computational vision, in particular in the domains of visual attention, scene understanding, control of eye movements, and surprise. This basic research has technological applications to, among others, video compression, target detection, and robotics. His work on visual attention and saliency has been widely adopted, with an explosion of applications not only in neuroscience and psychology, but also in machine vision, surveillance, defense, transportation, medical diagnosis, design and advertising. Itti has co-authored over 150 publications in peer-reviewed journals, books and conferences, three patents, and several open-source neuromorphic vision software toolkits.

Jim Little, University of British Columbia


Title: "TBA"


Coming soon


Coming soon


CRV 2016 will feature eight exciting symposia on subtopics related to computer and robot vision.

Field Robotics

  • Michael Jenkin, York Univ.

    "Coming soon"

    Coming soon.

  • David Meger, McGill Univ.

    "Coming soon"

    Coming soon.

Robotic Vision

  • Jonathan Kelly, Univ. of Toronto

    "Coming soon"

    Coming soon.

  • Alan Mackworth, Univ. of British Columbia

    "Coming soon"

    Coming soon.


  • Hong Zhang, Univ. of Alberta

    "Consensus Constraint - A Method for Pruning Outliers in Keypoint Matching"

    In this talk, I will describe a simple yet effective outlier pruning method for keypoint matching that is able to perform well under significant illumination changes, for applications including visual loop closure detection in robotics. We contend and verify experimentally that a major difficulty in matching keypoints when illumination varies significantly between two images is the low inlier ratio among the putative matches. The low inlier ratio in turn causes failure in the subsequent RANSAC algorithm since the correct camera motion has as much support as many of the incorrect ones. By assuming a weak perspective camera model and planar camera motion, we derive a simple constraint on correctly matched keypoints in terms of the flow vectors between the two images. We then use this constraint to prune the putative matches to boost the inlier ratio significantly thereby giving the subsequent RANSAC algorithm a chance to succeed. We validate our proposed method on multiple datasets, to show convincingly that it can deal with illumination change effectively in many computer vision and robotics applications where our assumptions hold true, with a superior performance to state-of-the-art keypoint matching algorithms.
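The abstract's core idea, pruning putative matches whose flow vectors disagree with the consensus before running RANSAC, can be illustrated with a toy sketch. This is not the authors' implementation; the median-based consensus, the `tol` threshold, and the synthetic data below are illustrative assumptions standing in for the constraint derived in the talk:

```python
import numpy as np

def prune_by_flow_consensus(pts1, pts2, tol=20.0):
    """Toy flow-consensus filter: keep putative matches whose flow
    vector (pts2 - pts1) lies close to the median flow. Under the
    talk's assumptions (weak perspective camera, planar motion),
    inlier flow vectors are tightly constrained, so gross mismatches
    stand out as large deviations. `tol` is an illustrative pixel
    threshold, not a value from the talk."""
    flow = pts2 - pts1                        # one flow vector per match
    median_flow = np.median(flow, axis=0)     # robust consensus estimate
    dev = np.linalg.norm(flow - median_flow, axis=1)
    keep = dev < tol                          # boolean inlier mask
    return pts1[keep], pts2[keep], keep

# Synthetic example: 50 matches sharing one motion, plus 10 outliers.
rng = np.random.default_rng(0)
p1 = rng.uniform(0, 640, size=(60, 2))
p2 = p1 + np.array([15.0, -4.0])                 # shared motion for inliers
p2[50:] += rng.uniform(100, 300, size=(10, 2))   # gross mismatches
_, _, mask = prune_by_flow_consensus(p1, p2)
print(mask.sum())  # → 50: only the consistent matches survive
```

Boosting the inlier ratio this way is what gives the subsequent RANSAC stage a realistic chance of finding the correct motion hypothesis.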

  • Philippe Giguère, Université Laval

    "Coming soon"

    Coming soon.


  • Richard Zemel, Univ. of Toronto

    "Coming soon"

    Coming soon.

  • Bryan Tripp, Univ. of Waterloo

    "Coming soon"

    Coming soon.

Medical Imaging

  • Tal Arbel, McGill Univ.

    "Iterative Hierarchical Probabilistic Graphical Model for the Detection and Segmentation of Multiple Sclerosis Lesions in Brain MRI"

    Probabilistic graphical models have been shown to be effective in a wide variety of segmentation tasks in the context of computer vision and medical image analysis. However, segmentation of pathologies can present a series of additional challenges to standard techniques. In the context of lesion detection and segmentation in brain images of patients with Multiple Sclerosis (MS), for example, challenges are numerous: lesions can be subtle, heterogeneous, vary in size and can be very small, often have ill-defined borders, with intensity distributions that overlap those of healthy tissues and vary depending on location within the brain. In this talk, recent work on multi-level, probabilistic graphical models based on Markov Random Fields (MRF) will be described to accurately detect and segment lesions and healthy tissues in brain images of patients with MS. Robustness and accuracy of the methods are illustrated through extensive experimentation on very large, proprietary datasets of real, patient brain MRI acquired during multicenter clinical trials. Recent work on the successful adaptation of the method to the problem of brain tumour detection and segmentation into sub-classes will also be discussed.

  • Mehran Ebrahimi, Univ. of Ontario Inst. of Tech.

    "Coming soon"

    Coming soon.

New Applications

  • Alexandra Albu, Univ. of Victoria

    "Computer Vision for Environmental Monitoring"

    Coming soon.

  • James J. Clark, McGill Univ.

    "Color Sensing and Display at Low Light Levels"

    The measurement and display of color information becomes challenging at low light levels. At low light levels sensor noise becomes significant, creating difficulties in estimating color quantities such as hue and saturation. On the display side, especially in mobile devices, low brightness levels are often desired in order to minimize power consumption. The human visual system perceives color in a different manner at low light levels than at high light levels, and so display devices operating at low brightness systems should account for this. This talk will cover recent developments in the area of modeling low light level color perception in humans, and the application of these models to intelligent display technology.

Human Robot Interaction

  • Richard Vaughan, Simon Fraser Univ.

    "Multi-Human, Multi-Robot Interaction at Various Scales"

    Most HRI work is one-to-one face-to-face over a table. I'll describe my group's work on the rest of the problem, with multiple robots and people, indoors and outdoors, at ranges from 1m to 50m, on ground robots and UAVs. We have focused particularly on having humans and robots achieve mutual attention as a prerequisite for subsequent one-on-one interaction, and how uninstrumented people can create and modify robot teams. I'll also suggest that sometimes robots should not do as they are told, for the user's own good.

  • Elizabeth Croft, Univ. of British Columbia

    "Up close and personal with human-robot collaboration"

    Advances in robot control, sensing and intelligence are rapidly expanding the potential for close-proximity human-robot collaborative work. In many different contexts, from manufacturing assembly to home care settings, a robot’s potential strength, precision and process knowledge can productively complement human perception, dexterity and intelligence to produce a highly coupled, coactive, human-robot team. Such interactions, however, require task-appropriate communication cues that allow each party to quickly share intentions and expectations around the task. These basic communication cues allow dyads, human-human or human-robot, to successfully and robustly pass objects, share spaces, avoid collisions and take turns – some of the basic building blocks of good, safe, and friendly collaboration regardless of one’s humanity. In this talk we will discuss approaches to identifying, characterizing, and implementing communicative cues and validating their impact in human-robot interaction scenarios.

3D Vision

Links to Previous Conferences

This page archives historical content from past CRV meetings. A second source for some of this information is maintained at the CIPPRS website.

Photo Credit:
Description : 2009-0605-Victoria-Harbor-PAN
Credit : © Bobak Ha'Eri - Own work. Licensed under CC BY 3.0 via Wikimedia Commons