Hinton is world-renowned for his work with neural nets, and this research has profound implications for areas such as speech recognition, computer vision and language understanding.
“Geoffrey Hinton’s research is a magnificent example of disruptive innovation with roots in basic research,” said U of T’s president, Professor David Naylor. “The discoveries of brilliant researchers, guided freely by their expertise, curiosity, and intuition, lead eventually to practical applications no one could have imagined, much less requisitioned.”
Recently, Krizhevsky and Sutskever, who will both be moving to Google, developed a system that dramatically improved the state of the art in object recognition.
The Google deal will support Prof. Hinton’s graduate students housed in the department’s machine learning group, while protecting their research autonomy under academic freedom. It will also allow Prof. Hinton himself to divide his time between his university research and his work at Google.
“I am extremely excited about this fantastic opportunity to keep my research here in Toronto and, at the same time, help Google apply new developments in deep learning to make systems that help people,” said Professor Hinton.
Professor Hinton will spend time at Google’s Toronto office and several months of the year at Google’s headquarters in Mountain View, CA.
This announcement comes on the heels of a $600,000 gift Google awarded Professor Hinton’s research group to support further work in the area of neural nets.
ImageNet Classification with Deep Convolutional Neural Networks (9-page paper)
Averaging the predictions of two CNNs that were pre-trained on the entire Fall 2011 release with the aforementioned five CNNs gives an error rate of 15.3%. The second-best contest entry achieved an error rate of 26.2% with an approach that averages the predictions of several classifiers trained on FVs computed from different types of densely-sampled features.
The results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning. It is notable that our network’s performance degrades if a single convolutional layer is removed. For example, removing any of the middle layers results in a loss of about 2% for the top-1 performance of the network. So the depth really is important for achieving our results.
To simplify our experiments, we did not use any unsupervised pre-training even though we expect that it will help, especially if we obtain enough computational power to significantly increase the size of the network without obtaining a corresponding increase in the amount of labeled data. Thus far, our results have improved as we have made our network larger and trained it longer but we still have many orders of magnitude to go in order to match the infero-temporal pathway of the human visual system. Ultimately we would like to use very large and deep convolutional nets on video sequences where the temporal structure provides very helpful information that is missing or far less obvious in static images.
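The ensemble result quoted above comes from averaging the class-probability outputs of several CNNs and taking the most probable class. A minimal sketch of that averaging step, using NumPy with toy stand-in probabilities rather than actual CNN outputs:

```python
import numpy as np

def ensemble_predict(prob_lists):
    """Average per-model class probabilities, return the winning class index."""
    avg = np.mean(np.stack(prob_lists), axis=0)  # shape: (num_classes,)
    return int(np.argmax(avg))

# Three toy "models" each scoring four classes (illustrative values only):
model_probs = [
    np.array([0.1, 0.6, 0.2, 0.1]),
    np.array([0.2, 0.5, 0.2, 0.1]),
    np.array([0.3, 0.3, 0.3, 0.1]),
]
print(ensemble_predict(model_probs))  # class 1 has the highest average probability
```

This is the generic form of prediction averaging; the paper's actual ensemble averaged softmax outputs of five CNNs plus two pre-trained CNNs over 1000 ImageNet classes.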
Other Google Moves in Machine Learning, AI, and Object Recognition and Classification
Google recently acquired Viewdle, which owns a number of facial-recognition patents, following its acquisitions of two similar startups: PittPatt in 2011 and Neven Vision all the way back in 2006.
Google hired Ray Kurzweil to lead an effort to develop breakthrough artificial intelligence.
Google has been working with D-Wave Systems (maker of adiabatic quantum computers) to use quantum systems to remove outliers in image classification and to speed up and improve machine learning.
SOURCES: TechCrunch, University of Toronto