Frequently Asked Questions

I compiled dlib's Python interface with CUDA enabled, why isn't it using CUDA?

Either you are using a part of dlib that just doesn't use CUDA, of which there are many parts, or you are mistaken about having compiled dlib with CUDA enabled. In particular, many users report that "dlib isn't using CUDA even though I definitely compiled it with CUDA", and in every case they are either not using a part of dlib that uses CUDA or they have installed multiple copies of dlib on their computer, some with CUDA disabled, and are running a non-CUDA build.

You can check if dlib is compiled to use CUDA by looking at the dlib.DLIB_USE_CUDA boolean. If dlib.DLIB_USE_CUDA is false then you didn't compile it with CUDA enabled, but if it's true then dlib is using all the CUDA it's going to use.
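
You can also check from the command line which copy of dlib your Python interpreter actually loads and whether that copy was built with CUDA, for example:
python -c "import dlib; print(dlib.__file__); print(dlib.DLIB_USE_CUDA)"
If the printed path points at a different dlib than the one you compiled with CUDA, that is your problem.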


Why is dlib.image_window missing from the Python module?

If you are getting the error module 'dlib' has no attribute 'image_window' it is because you compiled dlib without GUI support (or you are using a copy of dlib someone else compiled and they built it without GUI support). Note that it is possible to compile dlib without any GUI tools. Some people want to do this because they run dlib on systems that don't have any kind of GUI framework installed.

But since you are reading this you obviously want to use the GUI tools. The solution is to get a copy of dlib and run python setup.py install yourself. It's easy. Note that you might get a warning in the build output about X11 not being installed; maybe that's why you are getting this error in the first place. READ THAT MESSAGE AND FOLLOW ITS INSTRUCTIONS, since it tells you what to do to fix this.
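
If you don't already have a copy of the sources, the usual sequence looks roughly like this (assuming git and the normal Python build tools are installed):
git clone https://github.com/davisking/dlib.git
cd dlib
python setup.py install
Watch the build output for the X11 warning mentioned above.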


Why is some function missing from the dlib Python module?

If you are missing dlib.image_window then read the FAQ about that. If you are missing any other function then it's because you are using an old version of dlib that just doesn't have that function. You need to install a newer version of dlib. Please don't post questions about this on any of dlib's forums or email me about it. Just install a new dlib. The only way to use features in a new version of dlib is to get the new version of dlib. Often people think they have the new version of dlib installed when really they have some old version installed. You can see what version of dlib you are using by checking dlib.__version__.
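
For example:
python -c "import dlib; print(dlib.__version__)"
If that prints an older version than you expect, you are importing an old copy.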

How can I cite dlib?

If you use dlib in your research then please use the following citation:

Davis E. King. Dlib-ml: A Machine Learning Toolkit. Journal of Machine Learning Research 10, pp. 1755-1758, 2009

@Article{dlib09,
  author = {Davis E. King},
  title = {Dlib-ml: A Machine Learning Toolkit},
  journal = {Journal of Machine Learning Research},
  year = {2009},
  volume = {10},
  pages = {1755-1758},
}
         

How can I use dlib in Visual Studio?

First, note that you need a version of Visual Studio with decent C++11 support. This means you need Visual Studio 2015 or newer.

There are instructions on the How to Compile page. If you do not understand the instructions in the "Compiling on Windows Using Visual Studio" section or are getting errors then follow the instructions in the "Compiling on Any Operating System Using CMake" section. In particular, install CMake and then type these exact commands from within the root of the dlib distribution:
cd examples
mkdir build
cd build
del /F /S /Q *
cmake ..
cmake --build . --config Release
That should compile the dlib examples in Visual Studio. The output executables will appear in the Release folder. The del /F /S /Q * command is there to clear out any extraneous files you might have placed in the build folder and is not necessary if the build folder starts out empty.

How do I install/compile dlib?

Follow the official instructions. They tell you exactly what to type to use dlib.
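
For C++ projects the instructions boil down to using CMake. As a rough sketch, assuming the dlib source tree sits in a subfolder of your project, a minimal CMakeLists.txt looks like this (my_program.cpp is just a placeholder name):
cmake_minimum_required(VERSION 3.8)
project(my_project)
add_subdirectory(dlib)
add_executable(my_program my_program.cpp)
target_link_libraries(my_program dlib::dlib)
For Python, the instructions boil down to running python setup.py install from the root of the dlib distribution.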

How do I set the size of a matrix at runtime?

Long answer, read the matrix example program.

Short answer, here are some examples:
matrix<double> mat;
mat.set_size(4,5);

matrix<double,0,1> column_vect;
column_vect.set_size(6);

matrix<double,0,1> column_vect2(6);  // give size to constructor

matrix<double,1> row_vect;
row_vect.set_size(5);

How does dlib interface with other libraries/tools?

There should never be anything in dlib that prevents you from using or interacting with other libraries. Moreover, there are some additional tools in dlib to make some interactions easier.
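
For example, dlib ships with converters between its image types and OpenCV's cv::Mat. A minimal sketch, assuming OpenCV is installed (the file name is just a placeholder):
#include <opencv2/opencv.hpp>
#include <dlib/opencv.h>
#include <dlib/array2d.h>
#include <dlib/image_transforms.h>

void convert_example()
{
    cv::Mat frame = cv::imread("some_image.jpg");    // load with OpenCV
    dlib::cv_image<dlib::bgr_pixel> view(frame);     // zero-copy wrapper readable by dlib algorithms

    dlib::array2d<dlib::bgr_pixel> img;
    dlib::assign_image(img, view);                   // deep copy into a dlib image type

    cv::Mat back = dlib::toMat(img);                 // wrap the dlib image as a cv::Mat again
}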

It doesn't work?

Do not post a question like "I'm using dlib, and it doesn't work?" or "I'm using the object detector and it doesn't work, what do I do?". If this is all you say then I have no idea what is wrong. 99% of the time it's some kind of user error. 1% of the time it's some problem in dlib. But again, without more information it's impossible to know. So please don't post questions like this.

If you think you found some kind of bug or problem in dlib then feel free to submit a dlib issue on github. But include the version of dlib you are using, what you are trying, what happened, what you expected to have happened instead, etc.

On the other hand, if you haven't found a bug or problem in dlib, but instead are looking for machine learning/computer vision/programming help then post your question to stack overflow with the dlib tag.


Where is the documentation for <object/function>?

Every class and function in dlib is documented in detail. If you can't find something then check the index.

Also, the bulk of the documentation can be found by clicking the More Details... buttons. So you should click on the "more details" buttons and read the documentation.

A lot of people post questions like "There is no documentation for some_random_function(), how do I use it?", when in reality the function is documented in detail. Between the index, site search, and main website which breaks down functions/classes into topical categories there is no excuse for not being able to find the documentation for a function or class. This is especially true if you know its name because you can jump right to it using the index or even a simple google search. So if you are posting a question like "I don't understand how something works" and obviously haven't read the documentation then you are just going to get referred to this FAQ. So please read the documentation before asking questions.


Why do I get USER_ERROR__inconsistent_build_configuration__see_dlib_faq_1?

You are getting this error because you either forgot to link to dlib, or are not compiling all the C++ code in your program with consistent settings. The latter is wrong because it is a violation of C++'s One Definition Rule. In this case, you are compiling some translation units with dlib's assert macros enabled and others with them disabled.

For reference, the code that generates this error is: dlib/test_for_odr_violations.h and dlib/test_for_odr_violations.cpp.
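
For example, on Linux or macOS the simplest consistent setup is to compile your code and dlib/all/source.cpp together in one command with identical flags, along these lines (paths and flags are illustrative; -lX11 is only needed if you use the GUI tools):
g++ -std=c++14 -O3 -I/path/to/dlib-root my_program.cpp /path/to/dlib-root/dlib/all/source.cpp -lpthread -lX11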


Why do I get USER_ERROR__inconsistent_build_configuration__see_dlib_faq_2?

You are getting this error because you are not compiling all the C++ code in your program with consistent settings. This is a violation of C++'s One Definition Rule. In this case, you compiled a standalone copy of dlib with CMake and instead of using make install or cmake --build . --target install to copy the resulting build files somewhere you went and cherry picked files manually and messed it up. In particular, CMake compiled dlib with a bunch of settings recorded in the CMake generated config.h file but you instead are now trying to build more dlib related code with the dlib/config.h from source control.

For reference, the code that generates this error is: dlib/test_for_odr_violations.h and dlib/test_for_odr_violations.cpp.

Finally, most users who get this error are using Visual Studio. You probably compiled dlib and then went into Visual Studio's output folder, grabbed the .lib file, and then tried to create a project using that .lib file and dlib's .h files from github. THIS IS WRONG, DO NOT DO THIS. Instead, read the instructions for using dlib and follow them. I promise they are much simpler than any process that involves manually copying files around in the file explorer.
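
If you really do want a standalone dlib build, the correct sequence looks roughly like this, run from the root of the dlib distribution (on Linux the install step typically needs sudo):
mkdir build
cd build
cmake ..
cmake --build . --config Release
cmake --build . --config Release --target install
The install step copies a config.h that matches the compiled library, which is exactly the thing that manual file copying gets wrong.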


Why is dlib slow?

Dlib isn't slow. I get this question many times a week and 95% of the time it's from someone using Visual Studio who has compiled their program in Debug mode rather than the optimized Release mode. So if you are using Visual Studio then realize that Visual Studio has these two modes. The default is Debug, and the mode is selectable via a drop down.

Debug mode disables compiler optimizations, so the program will be very slow if you run it in Debug mode. Click the drop down and select Release instead. Then when you compile the program it will appear in a folder named Release rather than in a folder named Debug.

Finally, you can enable either SSE4 or AVX instruction use. These will make certain operations much faster (e.g. face detection). You do this using CMake's cmake-gui tool. For example, if you execute these commands you will get the cmake-gui screen:
cd examples
mkdir build
cd build
cmake .. 
cmake-gui .
This opens the cmake-gui window, where you can select SSE4 or AVX instruction use. Then click configure and then generate. After that, when you build your Visual Studio project some things will be faster. Finally, note that AVX is a little bit faster than SSE4, but if your computer is fairly old it might not support it. In that case, either buy a new computer or use SSE4 instructions.
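
If you prefer not to use the GUI, the same switches can be set on the command line, for example:
cmake .. -DUSE_AVX_INSTRUCTIONS=ON
cmake --build . --config Release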

Why isn't serialization working?

Here are the possibilities:

Can you give advice on feature generation/kernel selection?

Picking the right kernel all comes down to understanding your data, and obviously this is highly dependent on your problem.

One thing that's sometimes useful is to plot each feature against the target value. You can get an idea of what your overall feature space looks like and maybe tell if a linear kernel is the right solution. But this still hides important information from you. For example, imagine you have two diagonal lines which are very close together and are both the same length. Suppose one line is of the +1 class and the other is the -1 class. Each feature (the x or y coordinate values) by itself tells you almost nothing about which class a point belongs to but together they tell you everything you need to know.

On the other hand, if you know something about the data you are working with then you can also try and generate your own features. So for example, if your data is a bunch of images and you know that one of your classes contains a lot of lines then you can make a feature that attempts to measure the number of lines in an image using a hough transform or sobel edge filter or whatever. Generally, try and think up features which should be highly correlated with your target value. A good way to do this is to try and actually hand code N solutions to the problem using whatever you know about your data or domain. If you do a good job then you will have N really great features and a linear or rbf kernel will probably do very well when using them.

Or you can just try a whole bunch of kernels, kernel parameters, and training algorithm options while using cross validation. I.e. when in doubt, use brute force :) There is an example of that kind of thing in the model selection example program.
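
As a rough illustration of that brute force approach in C++ (the parameter grids here are arbitrary and samples/labels are assumed to already hold your training data):
#include <dlib/svm.h>
#include <iostream>
#include <vector>

void grid_search(const std::vector<dlib::matrix<double,0,1>>& samples,
                 const std::vector<double>& labels)
{
    typedef dlib::matrix<double,0,1> sample_type;
    typedef dlib::radial_basis_kernel<sample_type> kernel_type;

    dlib::svm_c_trainer<kernel_type> trainer;
    for (double gamma = 0.001; gamma <= 1; gamma *= 10)
    {
        for (double C = 1; C <= 100000; C *= 10)
        {
            trainer.set_kernel(kernel_type(gamma));
            trainer.set_c(C);
            // 3-fold cross validation; prints the fraction of +1 and -1 examples classified correctly
            std::cout << "gamma: " << gamma << "  C: " << C << "  cv accuracy: "
                      << dlib::cross_validate_trainer(trainer, samples, labels, 3);
        }
    }
}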


How can I define a custom kernel?

See the Using Custom Kernels example program.

Why does my decision_function always give the same output?

This happens when you use the radial_basis_kernel and you set the gamma value to something highly inappropriate. To understand what's happening, imagine your data has just one feature whose value ranges from 0 to 7. Then what you want is a gamma value that gives a nice Gaussian bump spanning that range. However, if you make gamma really huge the kernel becomes a spike that is zero everywhere except at a single point, and if you make gamma really small it is essentially 1.0 everywhere. Either way, the decision_function ends up producing nearly the same output for every input.

So you need to pick the gamma value so that it is scaled reasonably to your data. A good rule of thumb (i.e. not the optimal gamma, just a heuristic guess) is the following:

const double gamma = 1.0/compute_mean_squared_distance(randomly_subsample(samples, 2000));

Why is cross_validate_trainer_threaded() crashing?

This function makes a copy of your training data for each thread. So you are probably running out of memory. To avoid this, use the randomly_subsample function to reduce the amount of data you are using or use fewer threads.

For example, you could reduce the amount of data by saying this:

// reduce to only 1000 samples
cross_validate_trainer_threaded(trainer, 
                                randomly_subsample(samples, 1000), 
                                randomly_subsample(labels,  1000), 
                                4,   // num folds
                                4);  // num threads


Why is RVM training really slow?

The optimization algorithm is somewhat unpredictable. Sometimes it is fast and sometimes it is slow. What usually makes it really slow is if you use a radial basis kernel and you set the gamma parameter to something too large. This causes the algorithm to start using a whole lot of relevance vectors (i.e. basis vectors) which then makes it slow. The algorithm is only fast as long as the number of relevance vectors remains small but it is hard to know beforehand if that will be the case.

You should try kernel ridge regression instead since it also doesn't take any parameters but is always very fast.
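
For example, switching to dlib's kernel ridge regression trainer is usually a one line change (a sketch, assuming the usual column-vector samples; the gamma shown is just a placeholder):
typedef dlib::matrix<double,0,1> sample_type;
typedef dlib::radial_basis_kernel<sample_type> kernel_type;

dlib::krr_trainer<kernel_type> trainer;
trainer.set_kernel(kernel_type(0.1));
// then train exactly as you would with the RVM trainer:
// dlib::decision_function<kernel_type> df = trainer.train(samples, labels);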


Why doesn't the object detector I trained work?

There are three general mistakes people make when trying to train an object detector with dlib.

Why can't I change the network architecture at runtime?

A major design goal of this API is to let users create new loss layers, computational layers, and solvers without needing to understand or even look at the dlib internals. Many of the API decisions were made to keep the interface a user must implement to create a new layer as simple as possible, and designing the API in this compile-time static way is a big part of what makes that possible.

Here is an example of one problem it addresses. Since dlib exposes the entire network architecture to the C++ type system we can get automatic serialization of networks. Without this, we would have to resort to the kind of hacky global layer registry used in other tools that compose networks entirely at runtime.

Another nice feature is that we get to use C++11 alias template statements to create network sub-blocks, which we can then use to easily define very large networks. There are examples of this in this example program. It should also be pointed out that it takes days or even weeks to train one network. So it isn't as if you will be writing a program that loops over large numbers of networks and trains them all. This makes the time needed to recompile a program to change the network irrelevant compared to the entire training time. Moreover, there are plenty of compile time constructs in C++ you can use to enumerate network architectures (e.g. loop over filter widths) if you really wanted to do so.
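
For example, here is the sort of thing alias templates make easy (a sketch using layer names from dlib/dnn.h; the particular architecture is made up for illustration):
#include <dlib/dnn.h>
using namespace dlib;

// A reusable sub-block: a 3x3 convolution followed by a relu.
template <long num_filters, typename SUBNET>
using conv_block = relu<con<num_filters,3,3,1,1,SUBNET>>;

// Compose the sub-block to define a small 10-class network.
using net_type = loss_multiclass_log<fc<10,
                     conv_block<32, conv_block<16,
                     input<matrix<unsigned char>>>>>>;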

All that said, if you think you found a compelling use case that isn't supported by the current API feel free to post a github issue.


Why can't I use the DNN module with Visual Studio?

You can, but you need to use Visual Studio 2015 Update 3 or newer since prior versions had bad C++11 support. To make this as confusing as possible, Microsoft has released multiple different versions of "Visual Studio 2015 Update 3". As of October 2016, the version available from the Microsoft web page has good enough C++11 support to compile the DNN tools in dlib. So make sure you have a version no older than October 2016.

To make this even more complicated, Visual Studio 2017 had regressions in its C++11 support. So all versions of Visual Studio 2017 prior to December 2017 would just hang if you tried to compile the DNN examples. Happily, the newest versions of Visual Studio 2017 appear to have good C++11 support and will compile the DNN codes without any issue. So make sure your Visual Studio is fully updated.

Finally, it should be noted that you should give the -T host=x64 cmake option when generating a Visual Studio project. If you don't do this then you will get the default Visual Studio toolchain, which runs the compiler in 32bit mode, restricting it to 2GB of RAM, leading to compiler crashes due to it running out of RAM in some cases. This isn't the 1990s anymore, so you should probably run your compiler in 64bit mode so it can use your computer's RAM. Giving -T host=x64 will let Visual Studio use as much RAM as it needs.
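
For example, a typical set of commands from an empty build folder looks like this (the generator name will vary with your Visual Studio version):
cmake -G "Visual Studio 14 2015 Win64" -T host=x64 ..
cmake --build . --config Release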