1 Introduction

Machine learning and AI are among the most talked-about terms in every industry. Machine learning is a subset of Artificial Intelligence (AI) whose purpose is to learn from the data it is given, improving an application without explicit user input. One common way data enters such a system is in the form of an image. Computer vision models and replicates human vision with the help of computer software and hardware.

OpenCV is an open source, cross-platform library, originally created by Intel, to facilitate computer vision and machine learning. OpenCV libraries are mainly used for object detection and recognition, and their features support many applications in robotics, medicine, security, industrial automation and the automotive industry.

The latest releases of VxWorks 7 (SR0540 onwards) support the Open Source Computer Vision Library (OpenCV). VxWorks brings the safety-critical side into computer vision, which is a big plus in all the industries mentioned previously.

This blog focuses on edge detection, one of the many tools we can use to process an image. It goes into detail on how an OpenCV tool such as edge detection can be built and run with VxWorks.

2 Prerequisites

  • VxWorks 7 (SR0540 or later)
  • USB memory stick with grub configured
  • X86 64-bit PC (target) with an in-built or external USB camera
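For the "grub configured" item above, a minimal grub.cfg on the USB stick can look like the sketch below. The file layout and image name are assumptions — adjust them to wherever you copy your VxWorks image; x86 VxWorks images boot through GRUB's multiboot command:

```
set default=0
set timeout=3

menuentry "VxWorks 7" {
    # vxWorks kernel image copied to the root of the USB stick
    multiboot /vxWorks
}
```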

3 Build VxWorks image

First, we need to build the VxWorks source code (a VSB) in order to extract and patch the OpenCV libraries from GitHub.

We built the VSB with the itl_generic_2_0_0_0 BSP, the CORE CPU type and 64-bit address mode.
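If you drive the build from the command line rather than from Workbench, the VSB step looks roughly like the sketch below. The exact vxprj options vary between VxWorks 7 releases, so treat every flag here as an assumption and check the help output of your installation's vxprj tool:

```
# Create a 64-bit VSB for the itl_generic BSP (option names are assumptions)
vxprj vsb create -bsp itl_generic_2_0_0_0 -cpu CORE -lp64 myVsb
cd myVsb
# Enable the OpenCV-related layers in the VSB configuration, then build
make
```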

Add the following components in the VSB kernel configuration:


Once the VSB is built, navigate to the path below to find the README file with instructions on building the VIP and the RTP.


Build the RTP with one of the VxWorks examples already available, or with the new real-time edge detection code below.

Once the RTP file is built, add the .vxe file to a ROMFS project created inside the VIP.

4 Loading and executing the VxWorks image

We used a Dell laptop with an in-built camera to run the application.

Boot the target from the USB stick containing the latest VxWorks image.

4.1 Telnet Execution

Open PuTTY and telnet into the target device. Once connected, run the executable file from ROMFS.

-> devs
drv name
  0 /null
  1 /tyCo/0
  2 /pcConsole/0
  8 /romfs
  9 /input/event
 11 host:
  4 /bd0:1
 12 /uvc/0
  6 stdio_pty_0xffff8000006d97a0.S
  7 stdio_pty_0xffff8000006d97a0.M
value = 35 = 0x23 = '#'

-> rtpSp "/romfs/RTP_opencv_edge_detect.vxe"
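The RTP also accepts an optional first argument that overrides the default Scharr Canny threshold of 50 (it is converted with stoi in the code below, so it must be numeric). For example, with a hypothetical threshold of 80:

```
-> rtpSp "/romfs/RTP_opencv_edge_detect.vxe 80"
```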

5 Edge Detection Code

#include "opencv2/core/utility.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include <stdio.h>

#include <opencv2/videoio.hpp>
#include "fboutput/cvVxDisplay.hpp"

using namespace cv;
using namespace std;

Mat blurImage, edge1, edge2, cEdge;
int edgeThresh = 50;
int edgeThreshScharr = 50;

Mat GrayFrame;

static void help()
{
    printf("\nThis sample demonstrates Canny edge detection\n"
           "    ./edge [threshold -- Default is 50]\n\n");
}

int main( int argc, const char** argv )
{
    cv::CommandLineParser parser(argc, argv, "{@input||}{help h||}");
    string input = parser.get<string>("@input");

    if (parser.has("help"))
    {
        help();
        return 0;
    }

    if (argv[1] != NULL)
    {
        edgeThreshScharr = stoi(argv[1]);

        if (edgeThreshScharr == 0)
            edgeThreshScharr = 50;
    }

    VideoCapture VideoStream(0);    /* open the first camera device */
    VideoWriter out_vid;
    Mat ReferenceFrame;
    Mat frame1;

    if (!VideoStream.isOpened())
    {
        printf("Error: Cannot open video stream from camera\n");
        return 1;
    }

    VideoStream.set(CAP_PROP_FRAME_WIDTH, 640);
    VideoStream.set(CAP_PROP_FRAME_HEIGHT, 480);
    VideoStream.set(CAP_PROP_FPS, 30);
    out_vid.open("edge.avi", VideoWriter::fourcc('M','J','P','G'), 10,
                 Size(640, 480));

    do
    {
        VideoStream >> frame1;
        cvtColor(frame1, ReferenceFrame, COLOR_YUV2BGR_YUY2); /* camera
                                                               * delivers YUY2 */
        cEdge.create(ReferenceFrame.size(), ReferenceFrame.type());
        cvtColor(ReferenceFrame, GrayFrame, COLOR_BGR2GRAY); /* convert image to
                                                              * grayscale */

        blur(GrayFrame, blurImage, Size(3,3)); /* smooth to suppress noise */

#if 0 /* This code would output a low quality edge detection */

        /* Run edge detector on grayscale */
        Canny(blurImage, edge1, edgeThresh, edgeThresh*3, 3);
        cEdge = Scalar::all(0); /* Fill cEdge with zeros, all black */
        ReferenceFrame.copyTo(cEdge, edge1);

#endif

        /* Canny on Scharr gradients gives a better quality edge map */
        Mat dx, dy;
        Scharr(blurImage, dx, CV_16S, 1, 0);
        Scharr(blurImage, dy, CV_16S, 0, 1);
        Canny( dx, dy, edge2, edgeThreshScharr, edgeThreshScharr*3 );
        cEdge = Scalar::all(0); /* Fill cEdge with zeros, all black */
        ReferenceFrame.copyTo(cEdge, edge2);

        Mat rgb;
        cvtColor(cEdge, rgb, COLOR_BGR2BGRA); /* BGRA frame for the VxWorks
                                               * framebuffer display
                                               * (cvVxDisplay.hpp) */
        if (out_vid.isOpened())
            out_vid << cEdge;   /* record the edge stream to edge.avi */

    } while (1);

    return 0;
}