Call for Papers
Supporting an active and independent lifestyle at home has become a prominent concern in modern society.
Active and Assisted Living (AAL) systems support an independent and healthy lifestyle, which is particularly beneficial for the elderly and people with disabilities.
Due to the high demand for such systems, monitoring and assistance methodologies are being actively researched in computer vision.
Although great progress has been made towards more intelligent and personalized AAL systems, many open problems remain.
Uncontrolled variations in real-world environments, protection of user privacy, and effective user interaction with computers and environments are all examples of challenges that must be overcome before successful AAL systems can be widely deployed in our societies.
This workshop aims to provide a platform for discussing the problems and recent solutions associated with the development of AAL systems for real life settings.
We also encourage collaborations that tackle these challenging open problems and push the boundaries of the field.
Topics of interest include, but are not limited to, the following:
- Action and activity monitoring and recognition
- Gait analysis
- Human-environment interaction
- Human-machine interaction
- Assistive robotics
- Applications for the elderly
- Applications for functional mobility disorders
- Fall detection and prevention
- Human perception and emotion understanding
- Outdoor monitoring and assistance
- Multi-visual sensor network and topology
Full paper submission extended deadline: January 21, 2018
Decision Notification: January 30, 2018
Camera-ready Deadline: February 2, 2018
Workshop Date: March 15, 2018
Full paper (8 pages)
In submitting a full paper to the workshop, the authors acknowledge that no paper substantially similar in content has been or will be submitted to a journal, another conference or workshop during the review period.
Please follow the paper submission website to submit your manuscript to the regular paper track: https://cmt3.research.microsoft.com/CVAAL2018/Submission/Index.
Each paper will be peer reviewed by at least two reviewers.
Submissions should adhere to the main WACV 2018 proceedings style, with a maximum length of 8 pages excluding references.
Please refer to the guidelines (http://wacv18.uccs.us/submissions/authors/) provided by WACV 2018 for more details.
- The deadline for submitting full papers is January 21, 2018 (11:59pm Pacific time).
- In case of rejection from WACV, authors are allowed to re-submit their work to the workshop by February 2, 2018, in a dedicated submission track of the workshop's submission system (https://cmt3.research.microsoft.com/CVAAL2018/Submission/Index).
Short paper (2 pages)
We invite 2-page abstracts presenting relevant work that has been recently published, is in progress, or will be presented at the WACV main conference.
While there will be no formal proceedings, accepted abstracts will be posted on this workshop website.
Authors of accepted abstracts will present their work in an oral or poster session at the workshop.
8:30-8:50: Breakfast and registration
8:50-9:00: Opening and welcome
9:00-9:50: Keynote talk by Henry Medeiros
9:50-10:30: Short presentation of accepted papers (5 min teasers)
10:30-11:00: Poster session and coffee break
11:00-11:20: Invited talk by Joe Yue-Hei Ng
11:20-12:00: Panel discussion
Incorporating Domain Knowledge in the Design of Vision-based Assisted Living Systems
The ability to understand dynamic environments is an essential
requirement of assisted living systems, but despite the significant
research efforts that have been devoted to the development of
vision-based smart environments, most existing systems are still
largely limited to performing relatively simple tasks within somewhat
controlled environments. Deep learning techniques are now making it
possible to solve longstanding computer vision problems with the
robustness and flexibility required to devise practical assisted living
systems. However, these techniques are notoriously dependent on the
availability of massive amounts of data that closely reflects the
application under consideration, and cannot be directly applied to
systems for which it is impractical to collect such data beforehand. In
this talk, I will discuss how we can apply recent machine learning
methods to perform low level computer vision tasks such as object
detection, recognition, segmentation, and tracking, by leveraging our
knowledge of the known aspects of the environment. By incorporating
domain knowledge into our algorithms, it is possible to devise assisted
living systems that achieve performance levels comparable to those
obtained on traditional computer vision benchmark datasets, but without
the need to generate such large manually annotated datasets for each new application.
Henry Medeiros is an Assistant Professor of Electrical and Computer
Engineering at Marquette University. His research interests include
computer vision, robotics, and signal processing. His work focuses on
the application of machine learning and signal processing techniques to
solve problems of practical relevance and has been carried out in
collaboration with industry partners as well as federal agencies such
as the United States Department of Agriculture and the National
Institute of Standards and Technology. He has published over thirty
journal and peer-reviewed conference papers and holds several US and
international patents. Before joining Marquette, he was a Research
Scientist at the School of Electrical and Computer Engineering at
Purdue University and the Chief Technology Officer of Spensa
Technologies, a technology start-up company located at the Purdue
Research Park. At Spensa, he led a team that designed automated
insect monitoring systems for agricultural crops, which are now being
used on four continents. He received his Ph.D. from the School of
Electrical and Computer Engineering at Purdue University.
Paper invited from the main conference
ActionFlowNet: Learning Motion Representation for Action Recognition
We present a data-efficient representation learning approach to learn video representation with a small amount of labeled data. We propose a multitask learning model, ActionFlowNet, to train a single-stream network directly from raw pixels to jointly estimate optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. Our model effectively learns video representation from motion information on unlabeled videos. Our model improves action recognition accuracy by a large margin (23.6%) compared to state-of-the-art CNN-based unsupervised representation learning methods trained without external large-scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves recognition accuracy competitive with models trained on large labeled datasets such as ImageNet and Sports-1M.
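The multitask idea in the abstract — one network trained to recognize actions while also regressing optical flow — boils down to a weighted sum of a classification loss and a flow-reconstruction loss. The toy sketch below illustrates such a joint objective with NumPy; the loss forms (cross-entropy plus mean end-point error) and the `flow_weight` hyperparameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Classification head: cross-entropy over action-class logits.
    z = logits - logits.max()                      # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def endpoint_error(pred_flow, true_flow):
    # Flow head: mean end-point error over H x W x 2 flow fields.
    return np.sqrt(((pred_flow - true_flow) ** 2).sum(axis=-1)).mean()

def multitask_loss(logits, label, pred_flow, true_flow, flow_weight=0.5):
    # Joint objective: recognize the action AND estimate the flow,
    # so the shared backbone must encode both appearance and motion.
    return (softmax_cross_entropy(logits, label)
            + flow_weight * endpoint_error(pred_flow, true_flow))
```

Minimizing this combined loss forces the shared features to carry motion information even when action labels are scarce, which is the intuition behind the reported gains without external labeled data.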
Full papers to be published in WACV proceedings:
- Elizabeth Tran; Michael Mayhew; Alan Kaplan; Hyojin Kim; Piyush Karande. Facial Expression Recognition Using a Large Out-of-Context Dataset
- David Filliat; Panagiotis Papadakis. Generic Object Discrimination for Mobile Assistive Robots using Projective Light Diffusion
- Tilo Burghardt; Lili Tao; Majid Mirmehdi; Baodong Wang. Calorific Expenditure Estimation using Deep Convolutional Network Features
- Bhanu Singh; Manya Wadhwa; Saqib N Shamsi. Group Affect Prediction Using Multimodal Distributions
- Svati Dhamija; Terrance E Boult. Learning Visual Engagement for Trauma Recovery
Short papers to be published on the workshop's website:
- Xiang Xiang; Trac Tran; Ye Tian; Gregory D. Hager. Assessing Pain Levels from Videos Using Temporal Convolutional Networks