Interleaved Group Convolutions for Efficient Deep Neural Networks

主讲人 Speaker: Jingdong Wang (王井东)
时间 Time: Friday, 16:30-17:30, 2018-12-7
地点 Venue: Lecture Hall, 3rd Floor, Jinchunyuan West Building (近春园西楼), Tsinghua University

摘要 Abstract

Eliminating the redundancy in convolution kernels has been attracting increasing interest as a way to design efficient convolutional neural network architectures with three goals: small model size, fast computation, and high accuracy. Existing solutions include low-precision kernels, structured sparse kernels, low-rank kernels, and products of low-rank kernels. In this talk, I will introduce a novel framework, Interleaved Group Convolution (IGC), which composes a dense convolution kernel from a product of structured sparse kernels. It is a drop-in replacement for ordinary convolution and can be applied to any network that relies on convolution. I will present the complementary condition and the balance condition that guide the design, striking a balance among three aspects: model size, computational complexity, and classification performance. I will show empirical and theoretical justification for the advantage of the proposed approach over Xception and MobileNet. In addition, IGC raises a rarely studied matrix decomposition problem, sparse matrix factorization (SMF), and I hope to see more research effort on SMF from researchers in the area of matrix analysis.
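
As a rough illustration of the idea, below is a minimal PyTorch sketch of an IGC-style block: a primary group convolution with spatial kernels, a channel interleaving (permutation), and a secondary pointwise group convolution whose partitions each gather one channel from every primary partition (the complementary condition mentioned in the abstract). The module and parameter names (InterleavedGroupConv, primary_groups, etc.) and the layer sizes are illustrative assumptions, not the exact configuration presented in the talk.

import torch
import torch.nn as nn

class InterleavedGroupConv(nn.Module):
    """Product of two structured-sparse (group) convolutions that together
    act like a dense convolution over all channels (illustrative sketch)."""

    def __init__(self, channels, primary_groups, kernel_size=3):
        super().__init__()
        assert channels % primary_groups == 0
        secondary_groups = channels // primary_groups
        self.primary_groups = primary_groups
        self.secondary_groups = secondary_groups
        # Primary group convolution: spatial kernel, block-diagonal (sparse) in channels.
        self.primary = nn.Conv2d(channels, channels, kernel_size,
                                 padding=kernel_size // 2,
                                 groups=primary_groups, bias=False)
        # Secondary group convolution: 1x1 kernel, mixes channels across primary partitions.
        self.secondary = nn.Conv2d(channels, channels, 1,
                                   groups=secondary_groups, bias=False)

    @staticmethod
    def interleave(x, groups):
        # Channel permutation so each secondary partition receives exactly one
        # channel from every primary partition.
        n, c, h, w = x.shape
        x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
        return x.view(n, c, h, w)

    def forward(self, x):
        x = self.primary(x)
        x = self.interleave(x, self.primary_groups)
        x = self.secondary(x)
        # Permute back so the composed kernel corresponds to a dense convolution
        # in the original channel ordering.
        return self.interleave(x, self.secondary_groups)

# Example: used as a drop-in replacement for a dense 3x3 convolution on 64 channels.
block = InterleavedGroupConv(channels=64, primary_groups=8)
y = block(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])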

报告人简介 Profile

Jingdong Wang is a Senior Researcher in the Visual Computing Group at Microsoft Research Asia. His research areas include CNN architecture design, person re-identification, human pose estimation, semantic segmentation, large-scale indexing, and salient object detection. He has published one book and more than 100 papers in top conferences and leading international journals in computer vision, multimedia, and machine learning, and was a Best Paper Award finalist at ACM MM 2015. On the product side, he has delivered multiple technologies to Microsoft products, including Bing search, Cognitive Services, and the XiaoIce chatbot. He serves as an Associate Editor of the international journals IEEE TPAMI, IEEE TCSVT, and IEEE TMM, and has served as an Area Chair or Senior Program Committee member for top international conferences such as CVPR, ICCV, ECCV, AAAI, IJCAI, and ACM Multimedia. He is currently an ACM Distinguished Member and a member of IAPR.