Compiled by Wu Wei from the Google Research Blog
Produced by QbitAI | WeChat public account QbitAI
This morning, Google announced on its research blog the open-sourcing of MobileNets, a family of mobile-first computer vision models. With TensorFlow Mobile, these models can run efficiently on mobile devices, even offline.
QbitAI's translation of the original post follows:
In recent years, deep learning has contributed enormously to the advancement of computer vision, with neural networks continually pushing visual recognition technology forward. Many of these technologies, including object, landmark, logo, and text recognition, are delivered to internet-connected devices through the Cloud Vision API.
We believe, however, that the ever-increasing computing power of mobile devices can bring these technologies to users anytime and anywhere, even offline. Yet visual recognition in on-device and embedded applications faces many challenges: in resource-constrained environments, models must deliver both speed and accuracy within tight limits on computation, power, and space.
Today we are pleased to announce the release of MobileNets, a family of mobile-first computer vision models for TensorFlow, designed to maximize accuracy while respecting the restricted resources of on-device and embedded applications. MobileNets are small, low-latency, low-power models, parameterized to meet the resource constraints of a variety of use cases. Like popular large-scale models such as Inception, they can be used for tasks including classification, detection, embedding, and segmentation.
This release includes the model definitions for MobileNets, implemented with TF-Slim on TensorFlow, along with 16 pre-trained ImageNet classification checkpoints for mobile projects of all sizes. With TensorFlow Mobile, these models can run efficiently on mobile devices.
△ Choose the appropriate MobileNet model for your expected latency and model size. A network's footprint in memory and on disk is proportional to its number of parameters, while its latency and power consumption scale with the number of Multiply-Accumulates (MACs). Top-1 and Top-5 accuracies were measured on the ILSVRC dataset.
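The post does not detail the architecture, but the MAC savings above come largely from MobileNets' use of depthwise separable convolutions, which replace a standard convolution with a per-channel (depthwise) step followed by a 1×1 (pointwise) step. As a hypothetical illustration (the function names and example layer shape below are our own, not from the post), here is how the MAC counts compare:

```python
def standard_conv_macs(h, w, k, c_in, c_out):
    """MACs for a standard k x k convolution over an h x w feature map."""
    return h * w * k * k * c_in * c_out

def separable_conv_macs(h, w, k, c_in, c_out):
    """MACs for a depthwise separable convolution (depthwise + 1x1 pointwise)."""
    depthwise = h * w * k * k * c_in   # one k x k filter per input channel
    pointwise = h * w * c_in * c_out   # 1x1 convolution mixes channels
    return depthwise + pointwise

# Example layer: 112x112 feature map, 3x3 kernel, 32 -> 64 channels
std = standard_conv_macs(112, 112, 3, 32, 64)   # 231,211,008 MACs
sep = separable_conv_macs(112, 112, 3, 32, 64)  #  29,302,784 MACs
print(f"standard: {std:,}  separable: {sep:,}  speedup: {std / sep:.1f}x")
```

The reduction factor works out to roughly 1/c_out + 1/k², so a typical 3×3 layer needs about 8 to 9 times fewer MACs, which is why latency tracks MAC count so closely in the table above.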
Core contributors: Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam
Special thanks to: Benoit Jacob, Skirmantas Kligys, George Papandreou, Liang-Chieh Chen, Derek Chow, Sergio Guadarrama, Jonathan Huang, Andre Hentz, Pete Warden
Official guide for TensorFlow Mobile:
One More Thing…
What else in the AI world is worth your attention today? Reply "Today" in the QbitAI public account's chat interface to see the AI industry and research trends we have collected from across the web. ~