Intel RealSense Facial Recognition

Previous versions of the Intel RealSense SDK had facial recognition capabilities, but those capabilities were removed (see the Intel RealSense SDK for Windows end-of-life notice: https://software.intel.com/en-us/realsense-sdk-windows-eol) and are not available for the D400 series. See also "How Apple stomped on Intel's plans to make RealSense cameras detect emotions" (PC World, March 2016).
librealsense OpenCV Wrapper

The librealsense OpenCV wrapper (GitHub) notes: "RealSense examples have been designed and tested with OpenCV 3.4, Working with latest OpenCV 4 requires minor code changes"
RealSense, OpenCV and Python
OpenCV and Python under macOS
bash-3.2$ python
Python 2.7.16 (default, Apr 22 2019, 18:46:22)
[GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'4.1.0'
>>> quit
Use quit() or Ctrl-D (i.e. EOF) to exit
>>> quit();
bash-3.2$ which python
bash-3.2$ ls -l /usr/local/bin/python
lrwxr-xr-x 1 cxh admin 36 Apr 22 18:49 /usr/local/bin/python -> ../Cellar/python@2/2.7.16/bin/python
bash-3.2$

Get the Haar cascade files:

wget https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_eye.xml
wget https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_default.xml

Get the image:

wget https://upload.wikimedia.org/wikipedia/commons/9/96/Anjali-Sachin.jpg

Create faceDetector.py:

import numpy as np
import cv2 as cv

# Load the Haar cascade classifiers for faces and eyes
face_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv.CascadeClassifier('haarcascade_eye.xml')

img = cv.imread('Anjali-Sachin.jpg')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Detect faces, then look for eyes inside each face region
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    cv.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

cv.imshow('img', img)
cv.waitKey(0)
cv.destroyAllWindows()

Run it:

python faceDetector.py

The result is the input image with the detected faces and eyes outlined.
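As an aside, if OpenCV was installed from the opencv-python pip wheel, the Haar cascade XML files already ship with the package and the wget step can be skipped. A minimal sketch, assuming the pip wheel (cv2.data is not present in every OpenCV build):

import cv2 as cv

# The opencv-python wheel bundles the cascade XML files;
# cv2.data.haarcascades is the directory containing them.
print(cv.data.haarcascades)
face_cascade = cv.CascadeClassifier(
    cv.data.haarcascades + 'haarcascade_frontalface_default.xml')
eye_cascade = cv.CascadeClassifier(
    cv.data.haarcascades + 'haarcascade_eye.xml')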
"Replace pipe.start() with:" rs2::config cfg; cfg.enable_device_from_file(<filename>); pipe.start(cfg); // Load from file # Create pipeline pipeline = rs.pipeline() # Create a config object config = rs.config() # Tell config that we will use a recorded device from filem to be used by the pipeline through playback. rs.config.enable_device_from_file(config, args.input) # Configure the pipeline to stream the depth stream config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30) # Start streaming from file pipeline.start(config) If we copy an example by going to https://github.com/IntelRealSense/librealsense/blob/master/doc/sample-data.md and then clicking on the "Outdoors scene captured with D415 pre-production sample (Depth from Stereo)" example and downloading the the 403Mb (!) file. Unfortunately, Other file names:
Creating an mp4 file from a bag file
rs-convert -i /tmp/faces.bag -p /tmp/png/faces
ffmpeg -pattern_type glob -i 'faces_Color*.png' -pix_fmt yuv420p -c:v libx264 test.mp4
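The two steps can also be scripted; below is a sketch that shells out to rs-convert and ffmpeg (assuming both are on the PATH and the same /tmp paths as above; the glob works because rs-convert writes the color images with a <prefix>_Color prefix):

import subprocess

# Dump the streams of the .bag file to /tmp/png/faces_*.png
subprocess.run(['rs-convert', '-i', '/tmp/faces.bag', '-p', '/tmp/png/faces'],
               check=True)

# Stitch the color images into an H.264 .mp4
subprocess.run(['ffmpeg', '-pattern_type', 'glob',
                '-i', '/tmp/png/faces_Color*.png',
                '-pix_fmt', 'yuv420p', '-c:v', 'libx264', 'test.mp4'],
               check=True)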
RealSense, OpenCV and Python under macOS

Under macOS, build librealsense with:

cmake .. -DBUILD_EXAMPLES=true -DBUILD_WITH_OPENMP=false -DHWM_OVER_XU=false -DBUILD_PYTHON_BINDINGS=true -DBUILD_PYTHON_DOCS=true -DPYTHON_EXECUTABLE=/opt/local/bin/python3.7 -G "Unix Makefiles"

What worked for me was:

bash-3.2$ cp build/wrappers/python/*.so /usr/local/lib
bash-3.2$ cp build/*.dylib /usr/local/lib
bash-3.2$ export PYTHONPATH=/usr/local/lib
bash-3.2$ python3.7
Python 3.7.5 (default, Oct 19 2019, 01:20:12)
[Clang 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyrealsense2
>>>

faceDetectionRealSense.py:

# First import the library
import pyrealsense2 as rs
# Import Numpy for easy array manipulation
import numpy as np
# Import OpenCV for easy image rendering
import cv2 as cv
# Import os.path for file path manipulation
import os.path

face_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv.CascadeClassifier('haarcascade_eye.xml')

# img = cv.imread('Anjali-Sachin.jpg')

# Create pipeline
pipeline = rs.pipeline()

# Create a config object
config = rs.config()

# Tell config that we will use a recorded device from file to be used by the pipeline through playback.
# FIXME: Unfortunately, outdoors.bag does not contain any face data.
# So, we should create a bag file with some faces.
rs.config.enable_device_from_file(config, 'outdoors.bag')

# Configure the pipeline to stream the depth stream
# config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)

# Start streaming from file
pipeline.start(config)

window_name = "Display Image"
cv.namedWindow(window_name, cv.WINDOW_AUTOSIZE)

# Create colorizer object
colorizer = rs.colorizer()

try:
    while True:
        # Get frameset
        frames = pipeline.wait_for_frames()

        # Get depth frame
        # frame = frames.get_depth_frame()
        # Colorize depth frame to jet colormap
        # color_frame = colorizer.colorize(frame)

        color_frame = frames.get_color_frame()

        # Convert the frame to a numpy array to render the image in OpenCV
        img = np.asanyarray(color_frame.get_data())
        gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces:
            print(x, y, w, h)
            cv.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
            roi_gray = gray[y:y+h, x:x+w]
            roi_color = img[y:y+h, x:x+w]
            eyes = eye_cascade.detectMultiScale(roi_gray)
            for (ex, ey, ew, eh) in eyes:
                cv.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

        # Update the window with new data
        cv.imshow(window_name, img)
        key = cv.waitKey(1)
        # If escape is pressed, exit the program
        if key == 27:
            cv.destroyAllWindows()
            break
finally:
    pipeline.stop()

Problems

ImportError: No module named pyrealsense2

bash-3.2$ python faceDetectionRealSense.py
Traceback (most recent call last):
  File "faceDetectionRealSense.py", line 2, in <module>
    import pyrealsense2 as rs
ImportError: No module named pyrealsense2

Solution: Copy the shared library, which unfortunately is .so instead of .dylib:

cp /usr/local/lib/pyrealsense2.2.23.cpython-37m-darwin.so pyrealsense2.so

Symbol not found: _PyInstanceMethod_Type

ealmac23:opencv root# python faceDetectionRealSense.py
Traceback (most recent call last):
  File "faceDetectionRealSense.py", line 2, in <module>
    import pyrealsense2 as rs
ImportError: dlopen(/Users/cxh/src/opencv/pyrealsense2.so, 2): Symbol not found: _PyInstanceMethod_Type
  Referenced from: /Users/cxh/src/opencv/pyrealsense2.so
  Expected in: flat namespace
 in /Users/cxh/src/opencv/pyrealsense2.so

The pybind11 FAQ says:
You are likely using an incompatible version of Python (for instance, the extension library was compiled against Python 2, while the interpreter is running on top of some version of Python 3, or vice versa).
“Symbol not found: __Py_ZeroStruct / _PyInstanceMethod_Type”
See the first answer.
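Here the module file name already gives the game away: pyrealsense2.2.23.cpython-37m-darwin.so was built for CPython 3.7, but it was being imported from Python 2.7. A small standard-library check for this kind of mismatch:

import sys
import sysconfig

print(sys.version)
# The extension-module suffix this interpreter expects,
# e.g. '.cpython-37m-darwin.so' on CPython 3.7 / macOS.
# (On Python 2, EXT_SUFFIX is None; the older variable name is 'SO'.)
print(sysconfig.get_config_var('EXT_SUFFIX'))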
Solution: Use Python 3:

/usr/local/Cellar/python/3.7.3/bin/python3.7 faceDetectionRealSense.py

Couldn't resolve requests

There is no local camera, so I've been trying to read from a .bag file. The RealSense OpenCV C++ example was modified to read from a file:

//Add desired streams to configuration
cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
+ cfg.enable_device_from_file("/Users/cxh/src/opencv-bak/outdoors.bag");
//Instruct pipeline to start streaming with the requested configuration
pipe.start(cfg);

and compiled with:

g++ -I/opt/local/include -std=c++11 BGR_sample.cpp -L/opt/local/lib -lrealsense2 -lopencv_core -lopencv_highgui -o BGR

Then I get an error:

bash-3.2$ ./BGR
libc++abi.dylib: terminating with uncaught exception of type rs2::error: Couldn't resolve requests
Abort trap: 6

The same happens with my three files:

/Users/cxh/src/librealsense/unit-tests/resources/single_depth_color_640x480.bag
/Users/cxh/src/opencv-bak/D435i_Depth_and_IMU_Stands_still.bag
/Users/cxh/src/opencv-bak/outdoors.bag

However, the python read_bag_example.py (https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/read_bag_example.py) works with outdoors.bag:

python3.7 ./read_bag_example.py -i /Users/cxh/src/opencv-bak/outdoors.bag

However, it fails with the other two .bag files:

bash-3.2$ python3.7 ./read_bag_example.py -i /Users/cxh/src/librealsense/unit-tests/resources/single_depth_color_640x480.bag
Traceback (most recent call last):
  File "./read_bag_example.py", line 46, in <module>
    pipeline.start(config)
RuntimeError: Couldn't resolve requests
bash-3.2$ python3.7 ./read_bag_example.py -i /Users/cxh/src/opencv-bak/D435i_Depth_and_IMU_Stands_still.bag
Traceback (most recent call last):
  File "./read_bag_example.py", line 46, in <module>
    pipeline.start(config)
RuntimeError: Couldn't resolve requests
bash-3.2$

Solution? Looking at the librealsense docs for the rs2::config class reference:
So, editing the C++ example to use

cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_ANY, 30);

seems to work. Rather than guessing at formats, the stream profiles recorded in a .bag file can be listed directly, as in the sketch below.
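A minimal sketch, assuming pyrealsense2 and one of the .bag paths above; context.load_device() loads a recording as a playback device without any camera attached:

import pyrealsense2 as rs

ctx = rs.context()
# Load the recording as a playback device
dev = ctx.load_device('/Users/cxh/src/opencv-bak/outdoors.bag')

# Print every stream profile the file actually contains
for sensor in dev.query_sensors():
    for profile in sensor.get_stream_profiles():
        print(profile.stream_name(), profile.format(), profile.fps())

enable_stream() requests that match one of the printed profiles should then resolve.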
It seems like the problem has to do with combining enabling streams for depth and color while reading from a bag file. Interestingly, these worked:

config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.any, 30)

Also, using this for color works:

config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)

TypeError: Expected Ptr<cv::UMat> for argument '%s'

Now, if I edit the script to use

config.enable_stream(rs.stream.color, 640, 480, rs.format.any, 30)
...
print(color_frame)
print(type(color_frame))
width = color_frame.get_width()
height = color_frame.get_height()
stride = color_frame.get_stride_in_bytes()
bits_per_pixel = color_frame.get_bits_per_pixel()
bytes_per_pixel = color_frame.get_bytes_per_pixel()
print("w ", width, " h ", height, " stride: ", stride, " bits/pixel: ", bits_per_pixel, " bytes_per_pixel: ", bytes_per_pixel)
I get:

<pyrealsense2.video_frame object at 0x10f35c270>
<class 'pyrealsense2.video_frame'>
w  640  h  480  stride:  1920  bits/pixel:  24  bytes_per_pixel:  3
Traceback (most recent call last):
  File "realsense_face_detect.py", line 68, in <module>
    gray_frame = cv2.cvtColor(color_frame, cv2.COLOR_BGR2GRAY)
TypeError: Expected Ptr<cv::UMat> for argument '%s'

Solution: I was passing pyrealsense2 video frames to OpenCV instead of numpy arrays; convert with np.asanyarray(color_frame.get_data()) first. The working script:

## License: Apache 2.0. See LICENSE.
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.
## Based on https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py

###############################################
##      Open CV and Numpy integration        ##
###############################################

import pyrealsense2 as rs
import numpy as np
import cv2
import sys, traceback

cascPath = "../webcam_face_detect/haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascPath)

# Configure the color stream, reading from a .bag file
pipeline = rs.pipeline()
config = rs.config()
config.enable_device_from_file("/Users/cxh/src/bags/outdoors.bag")
config.enable_stream(rs.stream.color, 640, 480, rs.format.any, 30)

pipeline.start(config)

try:
    while True:
        # Wait for a color frame
        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        if not color_frame:
            continue

        # Convert the frame to a numpy array before handing it to OpenCV
        color_image = np.asanyarray(color_frame.get_data())
        gray_frame = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)

        faces = faceCascade.detectMultiScale(
            gray_frame,
            scaleFactor=1.1,
            minNeighbors=5,
            minSize=(30, 30),
            flags=cv2.CASCADE_SCALE_IMAGE
        )

        # Draw a rectangle around the faces
        for (x, y, w, h) in faces:
            print("face: ", x, y, w, h)
            cv2.rectangle(color_image, (x, y), (x+w, y+h), (0, 255, 0), 2)

        # Show images
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('RealSense', color_image)
        cv2.waitKey(1)
finally:
    # Stop streaming
    pipeline.stop()

OpenCV, Python and Intel RealSense under Windows

Under Windows, the Intel RealSense pyrealsense2 package was said to require Python 2 and not work with Python 3 (see Python RealSense). However, "Support of pyrealsense2 package on Python 3.7 (Windows)" states that it is supported.
So, Intel RealSense pyrealsense2 will work with both Python 2.7 and Python 3.x. In addition, LabJackPython does install under Python 3, despite "Why does labjackpython package still enforce python=2.7". So, Python 3.x can work. Note that installing the Intel RealSense Python pyrealsense2 module requires the 64-bit version of Python; these instructions are for the 32-bit version of Python 2.7, which will not work with tensorflow.
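A quick way to check which flavor of interpreter is installed is to print the pointer size of the running Python:

import struct

# 64 on a 64-bit Python, 32 on a 32-bit Python
print(struct.calcsize('P') * 8)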
Robot Operating System (ROS)

Using the Robot Operating System (ROS) could be interesting.