Difference between revisions of "Inference Inside OpenCV or Dlib"
Revision as of 18:24, 11 October 2019
Not Performing Inference Locally Inside OpenCV and Dlib
OpenCV’s DNN module will accept TensorFlow and PyTorch models, but there seem to be problems getting it to use CUDA and NVIDIA GPUs[1][2], which is one of the reasons we use an external neural network server.
Dlib’s DNN can use NVIDIA GPUs internally but can’t import any models but its own[3], so no TensorFlow, PyTorch, or even Caffe.
Other reasons why we don’t want to perform inference in our image processor are, in order of importance:
- Modularity
- It is often more feasible for relatively lightweight work (frame logic, etc.) to run on one machine, such as a Raspberry Pi, while the heavy lifting of DNN inference is offloaded to a much more powerful external machine
- We don’t want to put all our eggs in the one OpenCV or Dlib basket.
References

1. https://www.learnopencv.com/face-detection-opencv-dlib-and-deep-learning-c-python/ (Section 6. Speed Comparison)
2. https://answers.opencv.org/question/201456/how-to-run-opencv-dnn-on-nvidia-gpu/
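The split described above can be sketched as a thin client that ships JPEG-encoded frames to an external inference server. The URL, endpoint, and JSON reply shape here are assumptions for illustration only, not part of this project’s actual setup:

```python
import json
import urllib.request

# Hypothetical endpoint on the powerful external machine.
INFERENCE_URL = "http://gpu-box.local:8000/infer"

def detect_remotely(jpeg_bytes, url=INFERENCE_URL, timeout=5.0):
    """POST one JPEG frame to the inference server and return its parsed
    JSON reply (assumed to be a list of detections)."""
    req = urllib.request.Request(
        url,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())
```

The client side only needs to grab and encode frames (well within a Raspberry Pi’s budget), while the server owns the GPU, the framework choice, and the model, which is what keeps the design modular and avoids the single-basket problem.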