OpenCV (Linux)
Introduction
OpenCV is a popular open-source computer vision library used in a wide range of systems and applications. Toradex published a blog post in November 2017 about Starting with OpenCV on i.MX 6 processors, and a guest post on CNX-Software in May 2017 about Getting Started with OpenCV for Tegra on NVIDIA Tegra K1, CPU vs GPU Computer Vision Comparison. Those posts provide a good overview of OpenCV.
OpenCV can be deployed in images built with OpenEmbedded or in container images for Torizon.
Torizon
Read the articles:
- Torizon Sample: Using OpenCV for Computer Vision.
- How to Execute Models Tuned by SageMaker Neo using DLR Runtime, Gstreamer and OpenCV on TorizonCore.
Toradex BSP V2.8 (OpenCV 3.3)
In the transition from Toradex BSP V2.7 to V2.8, the OpenCV package migrated from version 3.1 to 3.3.
- For setting up OpenEmbedded, refer to the article Build a Reference Image with Yocto Project.
Change the "build/conf/local.conf" to add OpenCV:
IMAGE_INSTALL_append = " opencv"
Edit the OpenCV recipe (layers/meta-openembedded/meta-oe/recipes-support/opencv/opencv_3.3.bb) to tell the assembler to use Thumb instructions:
CXXFLAGS += " -Wa,-mimplicit-it=thumb"
Multicore (TBB) and GStreamer support are enabled by default.
Nvidia JetPack on Apalis TK1 (OpenCV for Tegra 2.4.x)
The Apalis TK1 computer on module uses the Tegra K1 SoC from Nvidia. This SoC has an Nvidia Kepler GPU with 192 CUDA cores, supported by OpenCV.
Nvidia provides closed-source, free to use, pre-compiled OpenCV libraries known as OpenCV for Tegra. Even though the public OpenCV can be built with CUDA support, OpenCV for Tegra benefits from additional multicore and NEON optimizations.
To use OpenCV for Tegra, you can install the Nvidia JetPack on Apalis TK1.
Build image and SDK
MACHINE=apalis-imx6 bitbake angstrom-lxde-image
MACHINE=apalis-imx6 bitbake -c populate_sdk angstrom-lxde-image
The output can be found here:
- For images, u-boot, uImage, rootfs, deployable tarball: build/out-glibc/deploy/images/${MACHINE}/
- For ipk packages: build/out-glibc/deploy/ipk/.../*.ipk
- Cross compiler and tools: build/out-glibc/sysroots/x86_64-linux/usr/bin/armv7a-vfp-angstrom-linux-gnueabi
- Library headers and unstripped binary libraries can be found in: build/out-glibc/sysroots/${MACHINE}/
- SDK: build/out-glibc/deploy/sdk/
Install the SDK
Install the SDK on your development computer (replace x86_64 with i686 if you use a 32-bit machine). The SDK provides the toolchain, headers, and libraries required for developing the application.
./out-glibc/deploy/sdk/angstrom-glibc-x86_64-armv7at2hf-vfp-neon-v2015.12-toolchain.sh
OpenCV Example
The following is an example face detection program using Haar cascade classifiers, modified from one of the many examples available in the OpenCV documentation.
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>

using namespace std;
using namespace cv;

CascadeClassifier face_cascade;
string window_name = "Face Detection Demo";
String face_cascade_name = "/home/root/haarcascade_frontalface_alt2.xml";
const int BORDER = 8; /* Border between GUI elements and the edge of the image */

template <typename T> string toString(T t)
{
    ostringstream out;
    out << t;
    return out.str();
}

/*
 * Draw text into an image. Defaults to top-left-justified text,
 * but you can give negative x coords for right-justified text,
 * and/or negative y coords for bottom-justified text.
 * Returns the bounding rect around the drawn text.
 */
Rect drawString(Mat img, string text, Point coord, Scalar color,
                float fontScale = 0.6f, int thickness = 1, int fontFace = FONT_HERSHEY_COMPLEX)
{
    /* Get the text size & baseline */
    int baseline = 0;
    Size textSize = getTextSize(text, fontFace, fontScale, thickness, &baseline);
    baseline += thickness;

    /* Adjust the coords for left/right-justified or top/bottom-justified */
    if (coord.y >= 0) {
        /*
         * Coordinates are for the top-left corner of the text
         * from the top-left of the image, so move down by one row.
         */
        coord.y += textSize.height;
    } else {
        /*
         * Coordinates are for the bottom-left corner of the text
         * from the bottom-left of the image, so come up from the bottom.
         */
        coord.y += img.rows - baseline + 1;
    }
    /* Become right-justified if desired */
    if (coord.x < 0) {
        coord.x += img.cols - textSize.width + 1;
    }
    /* Get the bounding box around the text */
    Rect boundingRect = Rect(coord.x, coord.y - textSize.height, textSize.width, baseline + textSize.height);
    /* Draw anti-aliased text */
    putText(img, text, coord, fontFace, fontScale, color, thickness, CV_AA);
    /* Let the user know how big their text is, in case they want to arrange things */
    return boundingRect;
}

int main(int argc, const char** argv)
{
    VideoCapture capture;
    Mat frame;
    std::vector<Rect> faces;
    Mat frame_gray;

    if (!face_cascade.load(face_cascade_name)) {
        printf("--(!)Error loading training file: haarcascade_frontalface_alt2.xml\n");
        return -1;
    }

    try {
        capture.open("v4l2:///dev/video3");
        capture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
        capture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    } catch (cv::Exception &e) {
        const char *err_msg = e.what();
        cout << "Exception caught: " << err_msg << endl;
    }

    if (!capture.isOpened()) {
        cout << "ERROR: Could not access the camera!" << endl;
        exit(1);
    }

    while (true) {
        capture >> frame;
        if (!frame.empty()) {
            cvtColor(frame, frame_gray, CV_BGR2GRAY);
            equalizeHist(frame_gray, frame_gray);
            face_cascade.detectMultiScale(frame_gray, faces, 1.2, 3, CV_HAAR_DO_CANNY_PRUNING, Size(80, 80));
            for (size_t i = 0; i < faces.size(); i++) {
                CvPoint pt1 = { faces[i].x, faces[i].y };
                CvPoint pt2 = { faces[i].x + faces[i].width, faces[i].y + faces[i].height };
                rectangle(frame, pt1, pt2, CV_RGB(0, 255, 0), 3, 4, 0);
                Mat faceROI = frame_gray(faces[i]);
            }
            string stringToDisplay = "Number Of Faces: " + toString(faces.size());
            drawString(frame, stringToDisplay, Point(BORDER, -BORDER - 2 - 50), CV_RGB(0, 0, 0));
            drawString(frame, stringToDisplay, Point(BORDER + 1, -BORDER - 1 - 50), CV_RGB(0, 255, 0));
            imshow(window_name, frame);
        } else {
            printf(" --(!) No captured frame");
        }
        int c = waitKey(1);
        if ((char)c == 27) {
            break;
        }
    }
    return 0;
}
Below is a Makefile for building the example code with make. CC, INCLUDES and LIB_PATH need to be modified to point to the relevant paths in your OpenEmbedded setup or installed SDK; the Makefile below assumes the standard SDK installation paths.
SYSROOTS ?= /usr/local/oecore-x86_64/sysroots
CC = ${SYSROOTS}/x86_64-angstromsdk-linux/usr/bin/arm-angstrom-linux-gnueabi/arm-angstrom-linux-gnueabi-g++
INCLUDES = -I${SYSROOTS}/armv7at2hf-vfp-neon-angstrom-linux-gnueabi/usr/include
LIB_PATH = -L${SYSROOTS}/armv7at2hf-vfp-neon-angstrom-linux-gnueabi/lib
LIBS = -lpthread -lopencv_highgui -lopencv_core -lopencv_imgproc -lopencv_objdetect -lopencv_videoio -lm
CFLAGS = -O2 -g -Wall -mfloat-abi=hard --sysroot=${SYSROOTS}/armv7at2hf-vfp-neon-angstrom-linux-gnueabi

all:
	${CC} ${CFLAGS} ${INCLUDES} facedetect.cpp -o facedetect ${LIB_PATH} ${LIBS}

clean:
	rm -f facedetect
Note
A USB camera was used for testing. The correct /dev/videoX node must be passed to the capture.open() call. Available video devices can be listed with v4l2-ctl.
root@apalis-imx6:~# v4l2-ctl --list-devices
[ 251.396035] ERROR: v4l2 capture: slave not found! V4L2_CID_HUE
[ 251.401943] ERROR: v4l2 capture: slave not found! V4L2_CID_HUE
[ 251.407957] ERROR: v4l2 capture: slave not found! V4L2_CID_HUE
DISP3 BG ():[ 251.415160] ERROR: v4l2 capture: slave not found! V4L2_CID_HUE
/dev/video16
/dev/video17
HD Pro Webcam C920 (usb-ci_hdrc.1-1.1.3):
/dev/video3
Failed to open /dev/video0: Resource temporarily unavailable
The haarcascade_frontalface_alt2.xml file can be found in ~oe-core/build/out-glibc/sysroots/apalis-imx6/usr/share/OpenCV/haarcascades/.
Legacy
This section holds instructions for older BSP versions.
Toradex BSP V2.7 (OpenCV 3.1)
In the transition from Toradex BSP V2.6 to V2.7, the OpenCV package migrated from version 2.4.11 to 3.1.
Change the "build/conf/local.conf" to add OpenCV:
IMAGE_INSTALL_append = " opencv"
Edit the OpenCV recipe (layers/meta-openembedded/meta-oe/recipes-support/opencv/opencv_3.1.bb) to tell the assembler to use Thumb instructions:
CXXFLAGS += " -Wa,-mimplicit-it=thumb"
Multicore (TBB) and GStreamer support are enabled by default.
Toradex BSP V2.6 (OpenCV 2.4.x)
Change the "build/conf/local.conf" to add OpenCV:
IMAGE_INSTALL_append = " opencv opencv-samples"
Enable multicore
To enable multicore support using TBB, modify the OpenCV recipe (layers/meta-openembedded/meta-oe/recipes-support/opencv/opencv_2.4.bb):
PACKAGECONFIG ??= "eigen jpeg png tiff v4l libv4l tbb"
PACKAGECONFIG[tbb] = "-DWITH_TBB=ON,-DWITH_TBB=OFF,tbb,"
Enable GStreamer support in OpenCV
The OpenCV BitBake recipe does not enable GStreamer support by default. Enable it by adding the following line to ~oe-core/stuff/meta-toradex/recipes-support/opencv/opencv_2.4.bbappend:
EXTRA_OECMAKE += "-DWITH_GSTREAMER=ON"