== Reasons Not To Perform Inference With OpenCV's or Dlib's DNN ==


OpenCV's DNN module will accept TensorFlow and PyTorch models, but there have been problems getting it to use CUDA and NVIDIA GPUs,<ref>https://www.learnopencv.com/face-detection-opencv-dlib-and-deep-learning-c-python/ (Section 6. Speed Comparison)</ref><ref>https://answers.opencv.org/question/201456/how-to-run-opencv-dnn-on-nvidia-gpu/</ref> which is one of the reasons why we use an external neural network server. ('''UPDATE:''' This appears to no longer be true. As of October 21st, 2019, OpenCV supports CUDA and NVIDIA GPUs.<ref>https://github.com/opencv/opencv/pull/14827</ref>)
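
For reference, a minimal sketch of what the new CUDA backend looks like from Python, assuming OpenCV 4.2+ built with CUDA support; the model file names are placeholders, not a model Grassland actually ships:

<syntaxhighlight lang="python">
import cv2

# A minimal sketch, assuming OpenCV >= 4.2 built with CUDA support
# (the backend added by the pull request cited above). The model file
# names below are placeholders for a TensorFlow detection model.
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb", "graph.pbtxt")

# Ask the DNN module to run on an NVIDIA GPU. If OpenCV was built
# without CUDA, it falls back to the default CPU backend.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

# Run inference on a single frame.
frame = cv2.imread("frame.jpg")
blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True, crop=False)
net.setInput(blob)
detections = net.forward()
</syntaxhighlight>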


Dlib's DNN can use NVIDIA GPUs internally, but it can't import any models other than its own.<ref>https://github.com/davisking/dlib/issues/469</ref> So no TensorFlow, PyTorch, or even Caffe.
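
To illustrate, the only way to load a network in dlib is from its own serialized format; a sketch using dlib's Python bindings, with the bundled CNN face detector weights from dlib.net standing in as the model:

<syntaxhighlight lang="python">
import dlib

# A minimal sketch: dlib can only deserialize networks saved in its
# own format. mmod_human_face_detector.dat (downloadable from dlib.net)
# is used here as a stand-in model; there is no equivalent of OpenCV's
# readNetFromTensorflow for foreign model formats.
cnn_detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")

# Inference runs on the GPU automatically when dlib was compiled
# with DLIB_USE_CUDA.
img = dlib.load_rgb_image("frame.jpg")
for d in cnn_detector(img, 1):
    print(d.rect, d.confidence)
</syntaxhighlight>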



The other reason we don't want to perform inference in the frame processor is modularity.[5] Flexibility is important because it's often more feasible for relatively lightweight work like frame logic to run on one machine (such as a Raspberry Pi) while the heavy lifting like DNN inference is offloaded to an external, much more powerful machine. It also lets us avoid putting all our eggs in one OpenCV or Dlib basket.
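
A hypothetical sketch of that split: the frame processor posts each frame to an external inference server and gets detections back. The endpoint URL and response shape below are illustrative assumptions, not Grassland's actual API:

<syntaxhighlight lang="python">
import cv2
import requests

# A hypothetical sketch of the split architecture: the frame processor
# (e.g. a Raspberry Pi) keeps only lightweight frame logic local and
# posts each frame to an external inference server. The URL and the
# response shape are illustrative assumptions.
INFERENCE_SERVER = "http://gpu-box.local:8500/v1/detect"

def detect_remote(frame):
    # Encode the frame as JPEG to keep the request payload small.
    ok, buf = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    resp = requests.post(
        INFERENCE_SERVER,
        data=buf.tobytes(),
        headers={"Content-Type": "image/jpeg"},
        timeout=5.0,
    )
    resp.raise_for_status()
    # Assume the server replies with a JSON list of detections,
    # e.g. [{"label": ..., "confidence": ..., "box": ...}, ...].
    return resp.json()
</syntaxhighlight>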

== References ==
<references />