MYNT® EYE D SDK

PRODUCT

Product Description

The MYNT EYE Depth utilizes a stereo camera and a motion sensor to provide visually accurate SLAM results with higher precision, lower cost, and simpler layout, along with the ability to perform face and object recognition. The concept of combining binocular vision with an IMU is the leading-edge technology in the current SLAM industry. The depth version of the product has a built-in depth calculation chip that can output depth images without a host computer. At the same time, the product is equipped with leading hardware features such as IR active light, a six-axis IMU, hardware-level frame synchronization, and a global shutter. It outputs synchronized image information at up to 720p/60fps (120° FOV version); the recognition distance can reach 15m (50° FOV version), with accuracy up to millimeters (50° FOV version).

Using camera techniques such as frame synchronization, automatic exposure, and white balance control, the MYNT EYE Depth can produce high-precision synchronized image sources, which decreases the difficulty of algorithm development and thus increases efficiency. The Depth comes with a six-axis sensor (IMU) and an infrared active light detector (IR). The six-axis sensor (IMU) provides complementarity and correction for data from the visual positioning algorithms, and is suitable for visual inertial odometry (VIO) algorithm research to help improve positioning accuracy. The infrared active light detector (IR) helps solve the problem of identifying objects such as indoor white walls and non-textured objects, and enhances the accuracy of image source recognition. The binocular+IMU scheme provides accurate six-axis complementary data for VSLAM applications and is more accurate and robust than single-sensor solutions. In addition, the MYNT EYE Depth also provides a rich SDK interface and VSLAM open source project support, which can help customers quickly integrate solutions, accelerate the product development process, and achieve rapid productization and implementation.

As a hardware product for in-depth research and development of stereo vision computing applications, the MYNT EYE Depth can be widely used in the field of visual positioning and navigation (vSLAM), including real-time visual positioning and navigation systems for driverless vehicles and robots, visual positioning systems for UAVs, obstacle avoidance and navigation systems for driverless vehicles, Augmented Reality (AR), Virtual Reality (VR), etc. At the same time, it can be used in the field of visual recognition, including stereoscopic face recognition, three-dimensional object recognition, space motion tracking, and three-dimensional gesture and somatosensory recognition. And of course, you can use it for measurement applications, including advanced driver-assistance systems (ADAS), binocular volume calculation, industrial visual screening, etc.

In order to ensure the quality of the camera's output data, we have calibrated both the stereo cameras and the IMU. The product has passed various hardware stability tests, such as continuous operation under high temperature and humidity, low-temperature dynamic aging, high-temperature operation, low-temperature storage, whole-machine thermal shock, and sinusoidal and random vibration tests, to ensure the stability and reliability of the product. In addition to research and development, the product can also be directly applied to mass production, accelerating the process from R&D to productization.

Product Surface

Shell (mm): 165x31.5x29.6

PCBA board (mm): 149x24

_images/mynteye_surface.png

A. Camera: please take care to protect the camera sensor lenses, to avoid imaging quality degradation.

B. Infrared structured-light transmitter and outlet: the infrared structured light can effectively solve the problems that white walls and other non-textured objects pose for visual positioning calculations (for the non-IR version, the outlet is reserved but there is no internal structured-light emitter).

C. USB Micro-B interface and set screw holes: during usage, plug in the USB Micro-B cable and secure it by fastening the set screws to avoid damage to the interface and to ensure a stable connection.

D. ¼ inch standard set screw hole: for fixing the stereo camera to tripods or other devices.

Product Specification

D1000-IR-120/Color Specification

Product parameters

Model: D1000-IR-120/Color
Size: 165x31.5x30.12mm
Frame Rate: up to 60FPS
Resolution: 2560x720; 1280x480
Depth Resolution: on chip, 1280x720 or 640x480
Pixel Size: 3.75x3.75μm
Baseline: 120.0mm
Visual Angle: D:121° H:105° V:58°
Focal Length: 2.45mm
IR Support: Yes
IR Detectable Range: 3m
Color Mode: Color
Working Distance: 0.32-7m
Scanning Mode: Global Shutter
Power: 1.9~3.5W@5V DC from USB
Synchronization Precision: <1ms (up to 0.01ms)
IMU Frequency: 200Hz
Output Data Format: YUYV/MJPG
Data Transfer Interface: USB2.0/3.0
Weight: 184g
UVC Mode: Yes

Software

Supported Systems: Windows 10, Ubuntu 16.04/18.04, ROS Kinetic/Melodic, Android 5.x ~ Android 8.x
SDK: http://www.myntai.com/dev/mynteye_depth
Supported Open Source Projects: ORB_SLAM2, OKVIS, Vins-Mono, Vins-Fusion, VIORB

Environment

Operating Temperature: -10°C~55°C
Storage Temperature: -15°C~70°C
Humidity: 10% to 80% non-condensing

Package

Content: MYNT EYE x1, USB Micro-B Cable x1

Warranty

Warranty: 12 Months Limited Manufacturer's Warranty

Precision

Depth Accuracy: the error does not exceed 2%

D1000-50/Color Specification

Product parameters

Model: D1000-50/Color
Size: 165x31.5x29.85mm
Frame Rate: up to 60FPS
Resolution: 2560x720; 1280x480
Depth Resolution: on chip, 1280x720 or 640x480
Pixel Size: 3.75x3.75μm
Baseline: 120.0mm
Visual Angle: D:70° H:64° V:38°
Focal Length: 3.9mm
IR Support: No
IR Detectable Range: -
Color Mode: Color
Working Distance: 0.49-10m
Scanning Mode: Global Shutter
Power: 1.8W@5V DC from USB
Synchronization Precision: <1ms (up to 0.01ms)
IMU Frequency: 200Hz
Output Data Format: YUYV/MJPG
Data Transfer Interface: USB2.0/3.0
Weight: 152g
UVC Mode: Yes

Software

Supported Systems: Windows 10, Ubuntu 16.04/18.04, ROS Kinetic/Melodic, Android 5.x ~ Android 8.x
SDK: http://www.myntai.com/dev/mynteye_depth
Supported Open Source Projects: ORB_SLAM2, OKVIS, Vins-Mono, Vins-Fusion, VIORB

Environment

Operating Temperature: -10°C~60°C
Storage Temperature: -20°C~70°C
Humidity: 10% to 80% non-condensing

Package

Content: MYNT EYE x1, USB Micro-B Cable x1

Warranty

Warranty: 12 Months Limited Manufacturer's Warranty

Precision

Depth Accuracy: the error does not exceed 2%

D1200 Specification

Product parameters

Model: D1200
Size: 75.5x34.5x12.9mm
Frame Rate: up to 30FPS
Resolution: 2560x720; 1280x480
Depth Resolution: 1280x720; 640x480
Pixel Size: 3.0x3.0μm
Baseline: 40.0mm
Visual Angle: D:66° H:59° V:35°
Focal Length: 3.3mm
IR Support: Yes
IR Detectable Range: 2m
Color Mode: Color
Working Distance: 0.2-3m
Scanning Mode: Rolling Shutter
Power: 0.75-2.5W@5V DC from USB
Output Data Format: YUYV/MJPG
Data Transfer Interface: Type-C/Micro USB2.0
Weight: 44g
UVC Mode: Yes

Software

Supported Systems: Android 5.x ~ Android 8.x

Environment

Operating Temperature: -10°C~55°C
Storage Temperature: -15°C~70°C
Humidity: 10% to 80% non-condensing

Package

Content: MYNT EYE x1, USB Cable

Warranty

Warranty: 12 Months Limited Manufacturer's Warranty

Precision

Depth Accuracy: the error does not exceed 1%

Support Resolutions

mode     interface    color resolution  color fps    depth resolution  depth fps
L'+D     USB3.0       1280x720          60/30/20/10  1280x720          60/30/20/10
L'+D     USB3.0       640x480           60/30        640x480           60/30
L'+R'+D  USB3.0       2560x720          30           1280x720          30
L'+R'+D  USB3.0       1280x480          60/30        640x480           60/30
L+D      USB3.0       1280x720          60/30/20/10  1280x720          60/30/20/10
L+D      USB3.0       640x480           60/30        640x480           60/30
L+R+D    USB3.0       2560x720          30           1280x720          30
L+R+D    USB3.0       1280x480          60/30        640x480           60/30
L+R      USB3.0       2560x720          30           not open          null
L'+R'    USB3.0       2560x720          30           not open          null
D        USB3.0       not open          null         1280x720          60/30
D        USB3.0       not open          null         640x480           60/30
L+R      USB2.0       2560x720          5            not open          null
L'+R'    USB2.0       2560x720          5            not open          null
L+R      USB2.0       1280x480          15           not open          null
L'+R'    USB2.0       1280x480          15           not open          null
L'+D     USB2.0       1280x720          5            640x720           5
L'+D     USB2.0       640x480           15           320x480           15
L+D      USB2.0       1280x720          5            640x720           5
L+D      USB2.0       640x480           15           320x480           15
L'       USB2.0       1280x720          5            not open          null
L        USB2.0       1280x720          5            not open          null
D        USB2.0       not open          null         640x720           5
D        USB2.0       not open          null         320x480           15
L+R      USB2.0/MJPG  2560x720          5            not open          null
L+R      USB2.0/MJPG  1280x480          15           not open          null
L        USB2.0/MJPG  1280x720          5            not open          null

Note

L' = left rectified image, L = left image, R' = right rectified image, R = right image, D = depth image

In IR Depth Only mode, the frame rate only supports 15fps and 30fps.
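
For example, to request one of the USB 3.0 rows of the table above, set the open parameters accordingly before opening the device. A minimal sketch, using the OpenParams fields shown in the SDK samples later in this document (see Set Open Parameters):

// Select the L+R+D / USB3.0 / 2560x720 / 30fps row of the table above.
params.dev_mode = DeviceMode::DEVICE_ALL;          // left + right + depth
params.stream_mode = StreamMode::STREAM_2560x720;  // left+right, 720p
params.framerate = 30;                             // 30 is the only option in this mode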

IMU Coordinate System

The IMU coordinate system is right-handed; the axis directions are as follows:

_images/mynteye_d_imu_coord.jpg

SDK

SDK Description

Supported Platforms

The SDK is built with CMake and can be used across multiple platforms such as Linux and Windows. We provide two installation modes: installing from a downloaded package, and compiling and installing from source code.

These are the platforms that can be used:

* Windows 10
* Ubuntu 18.04/16.04
* Jetson TX1 TX2 Xavier
* RK3399 firefly
* Raspberry Pi 3B

Tip

Ubuntu only supports the source installation mode, and only 64-bit systems are supported.

Warning

Due to the required hardware transmission rate, please use a USB 3.0 interface. In addition, virtual machines have USB driver compatibility problems and are therefore not recommended.

SDK Installation

Ubuntu Source Installation

1. Install SDK dependencies
1.1 Install OpenCV

If you have already installed OpenCV, or you want to use the OpenCV that ships with ROS, you can skip this part.

1.1.1 Install OpenCV with apt or compile (Choose one)
1.1.1.1 Install OpenCV with apt (Recommended)
sudo apt-get install libopencv-dev
1.1.1.2 Install OpenCV by Compile

To build and install OpenCV, please refer to Installation in Linux

Tip

If you need to install ROS, you can skip this step and use the OpenCV provided with ROS.

Alternatively, refer to the command below:

[compiler] sudo apt-get install build-essential
[required] sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
[optional] sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
git clone https://github.com/opencv/opencv.git
cd opencv/
git checkout tags/3.4.3

cd opencv/
mkdir build
cd build/

cmake ..

make -j4
sudo make install
1.2 Install PCL for Point Cloud sample (Optional)

To build and install PCL, please refer to PCL Installation

Tip

If you need to install ROS, you can skip this step and use the PCL provided with ROS.

sudo apt install -y libboost-all-dev libflann-dev libeigen3-dev libusb-1.0-0-dev libvtk6-dev libproj-dev
git clone https://github.com/PointCloudLibrary/pcl.git
cd pcl
git checkout pcl-1.7.2
mkdir build && cd build

cmake -DCMAKE_BUILD_TYPE=Release ..

make -j2
sudo make -j2 install
2. Build SDK
git clone https://github.com/slightech/MYNT-EYE-D-SDK.git
cd MYNT-EYE-D-SDK
2.1 Init SDK

Note

Because of device permission settings, you must reconnect the camera device after this command is executed. On the same computer, this operation only needs to be done once.

make init
2.2 Compile SDK
make all
3. Run Samples

Note

Rectified images are opened by default. (Running VIO requires raw images; running depth or point cloud samples requires rectified images.)

  1. get_image shows the left camera image and a colorful depthmap (compatible with USB2.0)

./samples/_output/bin/get_image
  2. get_stereo_image shows the left camera image and a colorful depthmap

./samples/_output/bin/get_stereo_image
  3. get_depth shows the left camera image, a 16UC1 depthmap and the depth value (mm) of the pixel under the mouse pointer

./samples/_output/bin/get_depth
  4. get_points shows the left camera image, a 16UC1 depthmap and a point cloud view

./samples/_output/bin/get_points
  5. get_imu shows motion data

./samples/_output/bin/get_imu
  6. get_img_params shows the camera intrinsics and saves them to a file

./samples/_output/bin/get_img_params
  7. get_imu_params shows the IMU intrinsics and saves them to a file

./samples/_output/bin/get_imu_params
  8. get_from_callbacks shows image and IMU data via callbacks

./samples/_output/bin/get_from_callbacks
  9. get_all_with_options opens the device with different options

./samples/_output/bin/get_all_with_options
  10. get_depth_with_filter displays the filtered depth image

./samples/_output/bin/get_depth_with_filter
  11. get_points_with_filter displays the filtered point cloud

./samples/_output/bin/get_points_with_filter
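
All of these samples share the same basic skeleton: select and open a device, then read stream data in a loop. Below is a condensed sketch of that skeleton; the header paths and the util::select helper are taken from the sample sources (see get_image.cc), so verify the exact calls there before relying on them:

#include <opencv2/highgui/highgui.hpp>

#include "mynteyed/camera.h"  // header path as used by the samples
#include "mynteyed/utils.h"

MYNTEYE_USE_NAMESPACE

int main() {
  Camera cam;
  DeviceInfo dev_info;
  if (!util::select(cam, &dev_info)) return 1;  // pick a connected device

  OpenParams params(dev_info.index);
  params.stream_mode = StreamMode::STREAM_1280x720;  // see Set Open Parameters
  cam.Open(params);
  if (!cam.IsOpened()) return 1;

  for (;;) {
    auto left_color = cam.GetStreamData(ImageType::IMAGE_LEFT_COLOR);
    if (left_color.img) {
      cv::Mat left = left_color.img->To(ImageFormat::COLOR_BGR)->ToMat();
      cv::imshow("left color", left);
    }
    if (cv::waitKey(10) == 27) break;  // ESC to quit
  }
  return 0;
}
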
4. Install With OpenCV ROS

If you won't use ROS (The Robot Operating System), you can skip this part.

For ROS installation and operation steps, refer to ROS Wrapper Installation and ROS Wrapper Usage .

5. Package

If you want to package with a specific OpenCV version:

cd <sdk>  # local path of SDK
make cleanall
export OpenCV_DIR=<install prefix>

export OpenCV_DIR=/usr/local
export OpenCV_DIR=$HOME/opencv-2.4.13.3

Packaging:

cd <sdk>  # local path of SDK
make pkg
6. Clean
cd <sdk>  # local path of SDK
make cleanall
make uninstall

Windows Source Installation

The following steps describe how to install from source code. If you want to use the prebuilt DLL instead, please see Windows EXE Installation .

1. Install Build Tools
1.1 Install Visual Studio

Download Visual Studio 2017 from https://visualstudio.microsoft.com/zh-hans/vs/older-downloads/ and install it, selecting the "Desktop development with C++" workload.

_images/vs_install.png

Tip

Visual Studio 2015 and Visual Studio 2017 are supported.

1.2 Install CMake

Download CMake from https://cmake.org/ and install it

1.3 Install MSYS2
  1. Download MSYS2 from http://mirrors.ustc.edu.cn/msys2/distrib/x86_64/ and install it

  2. Add the bin path to the System PATH environment variable list (Add to the PATH on Windows 10)

C:\msys64\usr\bin
  3. Install make : double click msys2.exe and input the following commands:

pacman -Syu
pacman -S make

Finally, the CMD (Command Prompt) can run the following command:

>make --version
GNU Make 4.2.1
2. Install SDK dependencies
2.1 Install OpenCV
2.1.1 Install OpenCV with Pre-built Libraries (Recommended)

For more details, refer to the official OpenCV documentation.

  1. Go to OpenCV Sourceforge page http://sourceforge.net/projects/opencvlibrary/files/opencv-win/

  2. Choose a build you want to use and download it. For example 3.4.3/opencv-3.4.3-vc14_vc15.exe

  3. Make sure you have admin rights. Unpack the self-extracting archive

  4. To finalize the installation, go to set the OpenCV environment variable and add it to the systems path

2.1.2 Set up environment variable
  1. Start up a command window as admin and enter the following command to add the OPENCV_DIR environment variable:

Change "D:\OpenCV" to your OpenCV unpack path

setx -m OPENCV_DIR D:\OpenCV\Build\x64\vc14\lib     (suggested for Visual Studio 2015 - 64 bit Windows)
setx -m OPENCV_DIR D:\OpenCV\Build\x64\vc15\lib     (suggested for Visual Studio 2017 - 64 bit Windows)

Or refer to Add to the PATH on Windows 10 to add the OPENCV_DIR environment variable manually.

D:\OpenCV\Build\x64\vc14\lib     (suggested for Visual Studio 2015 - 64 bit Windows)
D:\OpenCV\Build\x64\vc15\lib     (suggested for Visual Studio 2017 - 64 bit Windows)
  2. Add the OpenCV bin path to the System PATH environment variable list

D:\OpenCV\Build\x64\vc14\bin     (suggested for Visual Studio 2015 - 64 bit Windows)
D:\OpenCV\Build\x64\vc15\bin     (suggested for Visual Studio 2017 - 64 bit Windows)
2.2 Install libjpeg-turbo
  1. Download libjpeg-turbo from https://sourceforge.net/projects/libjpeg-turbo/files/ and install

  2. Add bin path to System PATH environment variable list

C:\libjpeg-turbo64\bin
2.3 Install PCL for Point Cloud sample (Optional)

Download All-in-one installers (PCL + dependencies) from: https://github.com/PointCloudLibrary/pcl/releases

3. Build SDK

Open the "x64 Native Tools Command Prompt for VS 2017" (suggested for Visual Studio 2017 - 64 bit Windows) command shell

git clone https://github.com/slightech/MYNT-EYE-D-SDK.git
cd MYNT-EYE-D-SDK
make all

Tip

Visual Studio Command Prompt can be opened from the Start menu,

_images/vs_cmd_menu.png

You can also open it from the Visual Studio Tools menu.

_images/vs_cmd.png

However, if you do not have the Visual Studio 2015 Tools menu, you can add one yourself.

Open Tools > External Tools… and add the following:

Title: Visual Studio Command Prompt
Command: C:\Windows\System32\cmd.exe
Arguments: /k "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\VsDevCmd.bat"
Initial Directory: $(SolutionDir)

4. Run Samples

Note: Rectified images are opened by default. (Running VIO requires raw images; running depth or point cloud samples requires rectified images.)

  1. get_image shows the left camera image and a colorful depthmap (compatible with USB2.0)

.\samples\_output\bin\get_image.bat
  2. get_stereo_image shows the left camera image and a colorful depthmap

.\samples\_output\bin\get_stereo_image.bat
  3. get_depth shows the left camera image, a 16UC1 depthmap and the depth value (mm) of the pixel under the mouse pointer

.\samples\_output\bin\get_depth.bat
  4. get_points shows the left camera image, a 16UC1 depthmap and a point cloud view

.\samples\_output\bin\get_points.bat
  5. get_imu shows motion data

.\samples\_output\bin\get_imu.bat
  6. get_img_params shows the camera intrinsics and saves them to a file

.\samples\_output\bin\get_img_params.bat
  7. get_imu_params shows the IMU intrinsics and saves them to a file

.\samples\_output\bin\get_imu_params.bat
  8. get_from_callbacks shows image and IMU data via callbacks

.\samples\_output\bin\get_from_callbacks.bat
  9. get_all_with_options opens the device with different options

.\samples\_output\bin\get_all_with_options.bat
  10. get_depth_with_filter displays the filtered depth image

.\samples\_output\bin\get_depth_with_filter.bat
  11. get_points_with_filter displays the filtered point cloud

.\samples\_output\bin\get_points_with_filter.bat
5. Clean
cd <sdk>  # local path of SDK
make cleanall

Windows EXE Installation

Download here: mynteye-d-x.x.x-win-x64-opencv-3.4.3.exe Google Drive, Baidu Pan

After you install the win pack of SDK, there will be a shortcut to the SDK root directory on your desktop.

First, you should plug the MYNT® EYE camera into a USB 3.0 port.

Second, go to the \bin\samples directory and click get_image.exe to run it.

Finally, you will see a window that displays the real-time frames of the camera.

Note

If you cannot run the samples successfully, please check whether the following paths were added to the system PATH variable: <SDK_ROOT_DIR>\bin , <SDK_ROOT_DIR>\bin\3rdparty , <SDK_ROOT_DIR>\3rdparty\opencv\build\x64\vc15\bin , <SDK_ROOT_DIR>\3rdparty\libjpeg-turbo64\bin . The point cloud related samples require PCL 1.9.0; with versions other than 1.9.0 they can be run from the samples project.

Generate samples project of Visual Studio 2017

First, you should install Visual Studio 2017 and CMake .

Second, go to the \samples directory and click generate.bat to run it.

Finally, you could click _build\mynteye_samples.sln to open the samples project.

P.S. Example output of "generate.bat":

-- The C compiler identification is MSVC 19.15.26732.1
-- The CXX compiler identification is MSVC 19.15.26732.1
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.15.26726/bin/Hostx86/x64/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.15.26726/bin/Hostx86/x64/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.15.26726/bin/Hostx86/x64/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.15.26726/bin/Hostx86/x64/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- HOST_ARCH: x86_64
-- Visual Studio >= 2010, MSVC >= 10.0
-- C_FLAGS: /DWIN32 /D_WINDOWS /W3 -Wall -O3
-- CXX_FLAGS: /DWIN32 /D_WINDOWS /W3 /GR /EHsc -Wall -O3
-- Found mynteye: 1.3.6
-- OpenCV ARCH: x64
-- OpenCV RUNTIME: vc15
-- OpenCV STATIC: OFF
-- Found OpenCV: C:/Users/John/AppData/Roaming/Slightech/MYNTEYED/SDK/1.3.6/3rdparty/opencv/build (found version "3.4.3")
-- Found OpenCV 3.4.3 in C:/Users/John/AppData/Roaming/Slightech/MYNTEYED/SDK/1.3.6/3rdparty/opencv/build/x64/vc15/lib
-- You might need to add C:\Users\John\AppData\Roaming\Slightech\MYNTEYED\SDK\1.3.6\3rdparty\opencv\build\x64\vc15\bin to your PATH to be able to run your applications.
-- Generating executable get_image
-- Generating get_image.bat
-- Generating executable get_depth
-- Generating get_depth.bat
-- Generating executable get_imu
-- Generating get_imu.bat
-- Configuring done
-- Generating done
CMake Warning:
  Manually-specified variables were not used by the project:

    CMAKE_BUILD_TYPE


-- Build files have been written to: C:/Users/John/AppData/Roaming/Slightech/MYNTEYED/SDK/1.3.6/samples/_build
Press any key to continue . . .

Tip

Right click a sample and select Set as StartUp Project, then launch it in Release x64 mode.

ROS Wrapper Installation

1.1 Install With OpenCV ROS

If you won't use ROS (The Robot Operating System), you can skip this part.

ROS Melodic (Ubuntu 18.04)
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
sudo apt update
sudo apt install ros-melodic-desktop-full
sudo rosdep init
rosdep update
echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
source ~/.bashrc
ROS Kinetic (Ubuntu 16.04)
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full
sudo rosdep init
rosdep update
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
1.2 Build ROS Wrapper
make init
make ros

Core:

roscore

RViz Display:

source ./wrappers/ros/devel/setup.bash
roslaunch mynteye_wrapper_d display.launch

Publish:

source ./wrappers/ros/devel/setup.bash
roslaunch mynteye_wrapper_d mynteye.launch

ROS Wrapper Usage

Compile and run the node according to ROS Wrapper Installation.

rostopic list lists all published topics:

/mynteye/depth/camera_info
/mynteye/depth/image_raw
/mynteye/depth/image_raw/compressed
/mynteye/depth/image_raw/compressed/parameter_descriptions
/mynteye/depth/image_raw/compressed/parameter_updates
/mynteye/depth/image_raw/compressedDepth
/mynteye/depth/image_raw/compressedDepth/parameter_descriptions
/mynteye/depth/image_raw/compressedDepth/parameter_updates
/mynteye/depth/image_raw/theora
/mynteye/depth/image_raw/theora/parameter_descriptions
/mynteye/depth/image_raw/theora/parameter_updates
/mynteye/imu/data_raw
/mynteye/imu/data_raw_processed
/mynteye/left/camera_info
/mynteye/left/image_color
/mynteye/left/image_color/compressed
...

rostopic hz <topic> checks the data:

subscribed to [/mynteye/imu/data_raw]
average rate: 202.806
    min: 0.000s max: 0.021s std dev: 0.00819s window: 174
average rate: 201.167
    min: 0.000s max: 0.021s std dev: 0.00819s window: 374
average rate: 200.599
    min: 0.000s max: 0.021s std dev: 0.00819s window: 574
average rate: 200.461
    min: 0.000s max: 0.021s std dev: 0.00818s window: 774
average rate: 200.310
    min: 0.000s max: 0.021s std dev: 0.00818s window: 974
  ...

rostopic echo <topic> prints the published data. Please read rostopic for more information.

The ROS wrapper is structured as follows:

<sdk>/wrappers/ros/
├─src/
  └─mynteye_wrapper_d/
     ├─launch/
     │  ├─display.launch
     │  └─mynteye.launch
     │  └─slam
     │     ├─orb_slam2.launch
     │     └─vins_fusion.launch
     │     └─vins_mono.launch
     ├─msg/
     ├─rviz/
     ├─src/
     │  ├─mynteye_listener.cc
     │  └─mynteye_wrapper_nodelet.cc
     │  └─mynteye_wrapper_node.cc
     │  └─pointcloud_generatort.cc
     │  └─pointcloud_generator.h
     ├─CMakeLists.txt
     ├─nodelet_plugins.xml
     └─package.xml

In mynteye.launch , you can configure topics and frame_ids , decide which data to enable, and set the control options. Please refer to Support Resolutions to set the frame rate and resolution, and set gravity to the local gravitational acceleration.

<!-- Camera Params -->

<!-- Device index -->
<arg name="dev_index" default="0" />
<!-- Framerate -->
<arg name="framerate" default="30" />

<!--
Device mode
  device_color: left_color ✓ right_color ? depth x
  device_depth: left_color x right_color x depth ✓
  device_all:   left_color ✓ right_color ? depth ✓
Note: ✓: available, x: unavailable, ?: depends on #stream_mode
-->
<arg name="dev_mode" default="$(arg device_all)" />

<!-- Set color mode from color_raw, color_rectified -->
<arg name="color_mode" default="$(arg color_raw)" />

<!--
Set depth mode
Note: must set DEPTH_RAW to get raw depth values for points
-->
<arg name="depth_mode" default="$(arg depth_raw)" />
<!--
Set resolution from stream_640x480,stream_1280x720,stream_1280x480,stream_2560x720
-->
<arg name="stream_mode" default="$(arg stream_2560x720)" />

<!-- Auto-exposure -->
<arg name="state_ae" default="true" />
<!-- Auto-white balance -->
<arg name="state_awb" default="true" />
<!-- IR intensity -->
<arg name="ir_intensity" default="4" />
<!-- IR Depth Only -->
<arg name="ir_depth_only" default="false" />

<!-- Setup your local gravity here -->
<arg name="gravity" default="9.8" />

SDK Samples

Get Camera Image

Using the API's DeviceMode::DEVICE_COLOR mode, you can get the color images; use DeviceMode::DEVICE_ALL to get both color and depth images.

Use GetStreamData() to get the data.

Reference code snippet:

// Device mode, default DEVICE_ALL
//   DEVICE_COLOR: IMAGE_LEFT_COLOR y IMAGE_RIGHT_COLOR - IMAGE_DEPTH n
//   DEVICE_DEPTH: IMAGE_LEFT_COLOR n IMAGE_RIGHT_COLOR n IMAGE_DEPTH y
//   DEVICE_ALL:   IMAGE_LEFT_COLOR y IMAGE_RIGHT_COLOR - IMAGE_DEPTH y
// Note: y: available, n: unavailable, -: depends on #stream_mode
params.dev_mode = DeviceMode::DEVICE_DEPTH;

auto left_color = cam.GetStreamData(ImageType::IMAGE_LEFT_COLOR);
if (left_color.img) {
  cv::Mat left = left_color.img->To(ImageFormat::COLOR_BGR)->ToMat();
  painter.DrawSize(left, CVPainter::TOP_LEFT);
  painter.DrawStreamData(left, left_color, CVPainter::TOP_RIGHT);
  painter.DrawInformation(left, util::to_string(counter.fps()),
      CVPainter::BOTTOM_RIGHT);
  cv::imshow("left color", left);
}

Complete code samples, see get_stereo_image.cc .

Get Camera Image (Compatible With USB2.0)

This sample is compatible with USB2.0 and automatically switches to the appropriate resolution and frame rate for USB 2.0. Using the API's DeviceMode::DEVICE_COLOR mode, you can get the color images; use DeviceMode::DEVICE_ALL to get both color and depth images.

Use GetStreamData() to get the data.

Reference code snippet:

// Device mode, default DEVICE_ALL
//   DEVICE_COLOR: IMAGE_LEFT_COLOR y IMAGE_RIGHT_COLOR - IMAGE_DEPTH n
//   DEVICE_DEPTH: IMAGE_LEFT_COLOR n IMAGE_RIGHT_COLOR n IMAGE_DEPTH y
//   DEVICE_ALL:   IMAGE_LEFT_COLOR y IMAGE_RIGHT_COLOR - IMAGE_DEPTH y
// Note: y: available, n: unavailable, -: depends on #stream_mode
params.dev_mode = DeviceMode::DEVICE_DEPTH;

auto left_color = cam.GetStreamData(ImageType::IMAGE_LEFT_COLOR);
if (left_color.img) {
  cv::Mat left = left_color.img->To(ImageFormat::COLOR_BGR)->ToMat();
  painter.DrawSize(left, CVPainter::TOP_LEFT);
  painter.DrawStreamData(left, left_color, CVPainter::TOP_RIGHT);
  painter.DrawInformation(left, util::to_string(counter.fps()),
      CVPainter::BOTTOM_RIGHT);
  cv::imshow("left color", left);
}

Complete code samples, see get_image.cc .

Get Depth Image

Depth images belong to the upper layer of synthesized data.

You can change depth_mode to change the display of the depth image.

// Depth mode: colorful(default), gray, raw
params.depth_mode = DepthMode::DEPTH_RAW;

Then you can get it through GetStreamData(). In addition, check that it is not empty before use.

Reference code snippet:

auto image_depth = cam.GetStreamData(ImageType::IMAGE_DEPTH);
if (image_depth.img) {
  cv::Mat depth = image_depth.img->To(ImageFormat::DEPTH_RAW)->ToMat();

  cv::setMouseCallback("depth", OnDepthMouseCallback, &depth_region);
  // Note: DrawRect will change some depth values to show the rect.
  depth_region.DrawRect(depth);
  cv::imshow("depth", depth);

  depth_region.ShowElems<ushort>(depth, [](const ushort& elem) {
    return std::to_string(elem);
  }, 80, depth_info);
}

The above code uses OpenCV to display the image. When the display window is selected, pressing ESC/Q will end the program.

Note

The get_depth sample only supports DEPTH_RAW mode. You can modify the depth_mode parameter of the other samples to get depth images.

Complete code examples, see get_depth.cc.

Get Point Image

Point images belong to the upper layer of synthesized data. You can get them through GetStreamData() , and they should be checked not to be empty before use. In addition, when running get_points , you can press "space" to save .ply files; the view_points sample can then be used to view the .ply files.

Sample code snippet:

auto image_color = cam.GetStreamData(ImageType::IMAGE_LEFT_COLOR);
auto image_depth = cam.GetStreamData(ImageType::IMAGE_DEPTH);
if (image_color.img && image_depth.img) {
    cv::Mat color = image_color.img->To(ImageFormat::COLOR_BGR)
        ->ToMat();
    painter.DrawSize(color, CVPainter::TOP_LEFT);
    painter.DrawStreamData(color, image_color, CVPainter::TOP_RIGHT);
    painter.DrawInformation(color, util::to_string(counter.fps()),
        CVPainter::BOTTOM_RIGHT);

    cv::Mat depth = image_depth.img->To(ImageFormat::DEPTH_RAW)
        ->ToMat();

    cv::imshow("color", color);

    viewer.Update(color, depth);
}

PCL is used to display the point images above. The program will close when the point image window is closed.

Complete code examples, see get_points.cc.

Get IMU Data

You need EnableMotionDatas() to enable caching in order to get IMU data from GetMotionDatas() . Otherwise, IMU data is only available through the callback interface; see Get Data From Callbacks.
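
Caching is typically enabled around device open, before data is requested. A minimal sketch (the no-argument call and its placement before Open() follow the get_imu sample as an assumption; check that source for the exact signature):

// Enable caching of motion data; without this call, IMU data is only
// delivered through the callback interface (SetMotionCallback()).
cam.EnableMotionDatas();
cam.Open(params);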

Sample code snippet:

auto motion_datas = cam.GetMotionDatas();
if (motion_datas.size() > 0) {
    std::cout << "Imu count: " << motion_datas.size() << std::endl;
    for (auto data : motion_datas) {
        if (data.imu) {
            if (data.imu->flag == MYNTEYE_IMU_ACCEL) {
                counter.IncrAccelCount();
                std::cout << "[accel] stamp: " << data.imu->timestamp
                    << ", x: " << data.imu->accel[0]
                    << ", y: " << data.imu->accel[1]
                    << ", z: " << data.imu->accel[2]
                    << ", temp: " << data.imu->temperature
                    << std::endl;
            } else if (data.imu->flag == MYNTEYE_IMU_GYRO) {
                counter.IncrGyroCount();
                std::cout << "[gyro] stamp: " << data.imu->timestamp
                << ", x: " << data.imu->gyro[0]
                << ", y: " << data.imu->gyro[1]
                << ", z: " << data.imu->gyro[2]
                << ", temp: " << data.imu->temperature
                << std::endl;
            } else {
                std::cerr << "Imu type is unknown" << std::endl;
            }
        } else {
          std::cerr << "Motion data is empty" << std::endl;
        }
    }
    std::cout << std::endl;
}

OpenCV is used to display the image and data. When the window is selected, press ESC/Q to exit the program.

Complete code examples, see get_imu.cc.

Get Data From Callbacks

The API offers the functions SetStreamCallback() and SetMotionCallback() to set callbacks for the various data.

Reference code snippet:

cam.SetImgInfoCallback([](const std::shared_ptr<ImgInfo>& info) {
    std::cout << "  [img_info] fid: " << info->frame_id
        << ", stamp: " << info->timestamp
        << ", expos: " << info->exposure_time << std::endl
        << std::flush;
});
for (auto&& type : types) {
    // Set stream data callback
    cam.SetStreamCallback(type, [](const StreamData& data) {
    std::cout << "  [" << data.img->type() << "] fid: "
        << data.img->frame_id() << std::endl
        << std::flush;
    });
}

// Set motion data callback
cam.SetMotionCallback([](const MotionData& data) {
    if (data.imu->flag == MYNTEYE_IMU_ACCEL) {
        std::cout << "[accel] stamp: " << data.imu->timestamp
        << ", x: " << data.imu->accel[0]
        << ", y: " << data.imu->accel[1]
        << ", z: " << data.imu->accel[2]
        << ", temp: " << data.imu->temperature
        << std::endl;
    } else if (data.imu->flag == MYNTEYE_IMU_GYRO) {
        std::cout << "[gyro] stamp: " << data.imu->timestamp
        << ", x: " << data.imu->gyro[0]
        << ", y: " << data.imu->gyro[1]
        << ", z: " << data.imu->gyro[2]
        << ", temp: " << data.imu->temperature
        << std::endl;
    }
    std::cout << std::flush;
});

OpenCV is used to display the images and data above. When the window is selected, pressing ESC/Q will exit the program.

Complete code examples, see get_from_callbacks.cc.

Get Different Types Of Image By Options

The get_all_with_options sample can add different options to control the device.

get_all_with_options -h:

Open device with different options.

Options:
  -h, --help            show this help message and exit
  -m, --imu             Enable imu datas

  Open Params:
    The open params

    -i INDEX, --index=INDEX
                        Device index
    -f RATE, --rate=RATE
                        Framerate, range [0,60], [30](STREAM_2560x720),
                        default: 10
    --dev-mode=MODE       Device mode, default 2 (DEVICE_ALL)
                          0: DEVICE_COLOR, left y right - depth n
                          1: DEVICE_DEPTH, left n right n depth y
                          2: DEVICE_ALL,   left y right - depth y
                          Note: y: available, n: unavailable, -: depends on
                        stream mode
    --cm=MODE           Color mode, default 0 (COLOR_RAW)
                          0: COLOR_RAW, color raw
                          1: COLOR_RECTIFIED, color rectified
    --dm=MODE           Depth mode, default 2 (DEPTH_COLORFUL)
                          0: DEPTH_RAW
                          1: DEPTH_GRAY
                          2: DEPTH_COLORFUL
    --sm=MODE           Stream mode of color & depth,
                        default 2 (STREAM_1280x720)
                          0: STREAM_640x480, 480p, vga, left
                          1: STREAM_1280x480, 480p, vga, left+right
                          2: STREAM_1280x720, 720p, hd, left
                          3: STREAM_2560x720, 720p, hd, left+right
    --csf=MODE          Stream format of color,
                        default 1 (STREAM_YUYV)
                          0: STREAM_MJPG
                          1: STREAM_YUYV
    --dsf=MODE          Stream format of depth,
                        default 1 (STREAM_YUYV)
                          1: STREAM_YUYV
    --ae                Enable auto-exposure
    --awb               Enable auto-white balance
    --ir=VALUE          IR intensity, range [0,6], default 0
    --ir-depth          Enable ir-depth-only

  Feature Toggles:
    The feature toggles

    --proc=MODE         Enable process mode, e.g. imu assembly, temp_drift
                          0: PROC_NONE
                          1: PROC_IMU_ASSEMBLY
                          2: PROC_IMU_TEMP_DRIFT
                          3: PROC_IMU_ALL
    --img-info          Enable image info, and sync with image

e.g. ./samples/_output/bin/get_all_with_options -f 60 --dev-mode=0 --sm=2 displays the 1280x720 60fps left unrectified image.

Complete code samples, see get_all_with_options.cc .

Get Image Calibration Parameters

Use GetStreamIntrinsics() and GetStreamExtrinsics() to get image calibration parameters.

Reference code snippet:

auto vga_intrinsics = cam.GetStreamIntrinsics(StreamMode::STREAM_1280x480, &in_ok);
auto vga_extrinsics = cam.GetStreamExtrinsics(StreamMode::STREAM_1280x480, &ex_ok);
std::cout << "VGA Intrinsics left: {" << vga_intrinsics.left << "}" << std::endl;
std::cout << "VGA Intrinsics right: {" << vga_intrinsics.right << "}" << std::endl;
std::cout << "VGA Extrinsics left to right: {" << vga_extrinsics << "}" << std::endl;
out << "VGA Intrinsics left: {" << vga_intrinsics.left << "}" << std::endl;
out << "VGA Intrinsics right: {" << vga_intrinsics.right << "}" << std::endl;
out << "VGA Extrinsics left to right: {" << vga_extrinsics << "}" << std::endl;

The result will be saved to a file in the current working directory. Reference result on Linux:

VGA Intrinsics left: {width: [640], height: [480], fx: [358.45721435546875000], fy: [359.53115844726562500], cx: [311.12109375000000000], cy: [242.63494873046875000]coeffs: [-0.28297042846679688, 0.06178283691406250, -0.00030517578125000, 0.00218200683593750, 0.00000000000000000]}
VGA Intrinsics right: {width: [640], height: [480], fx: [360.13885498046875000], fy: [360.89624023437500000], cx: [325.11029052734375000], cy: [251.46371459960937500]coeffs: [-0.30667877197265625, 0.08611679077148438, -0.00030136108398438, 0.00155639648437500, 0.00000000000000000]}
VGA Extrinsics left to right: {rotation: [0.99996054172515869, 0.00149095058441162, 0.00875246524810791, -0.00148832798004150, 0.99999880790710449, -0.00030362606048584, -0.00875294208526611, 0.00029063224792480, 0.99996161460876465], translation: [-120.36341094970703125, 0.00000000000000000, 0.00000000000000000]}

Note

In the parameters: Intrinsics provides the values fx , fy , cx , cy , from which you can build the intrinsic camera matrix (refer to sensor_msgs/CameraInfo.msg ); the distortion parameters coeffs contain the values k1 , k2 , p1 , p2 , k3 . Extrinsics contains the rotation matrix rotation and the translation vector translation .
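
For illustration, the printed values can be assembled into the usual 3x3 camera matrix and distortion vector with OpenCV. A sketch using the (rounded) left VGA intrinsics above:

// Intrinsic camera matrix K built from fx, fy, cx, cy (rounded values
// copied from the left VGA intrinsics printed above).
cv::Matx33d K(358.457,     0.0, 311.121,
                  0.0, 359.531, 242.635,
                  0.0,     0.0,     1.0);
// Distortion coefficients in OpenCV order: k1, k2, p1, p2, k3.
cv::Vec<double, 5> D(-0.28297, 0.06178, -0.00031, 0.00218, 0.0);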

Complete code examples, see get_img_params.cc .

Get IMU Calibration Parameters

Use GetMotionIntrinsics() and GetMotionExtrinsics() to get the current IMU calibration parameters.

Reference code snippet:

auto intrinsics = cam.GetMotionIntrinsics(&in_ok);
std::cout << "Motion Intrinsics: {" << intrinsics << "}" << std::endl;
out << "Motion Intrinsics: {" << intrinsics << "}" << std::endl;

The result will be saved to a file in the current working directory. Reference result on Linux:

Motion Intrinsics: {accel: {scale: [1.00205999990004191, 0.00000000000000000, 0.00000000000000000, 0.00000000000000000, 1.00622999999999996, 0.00000000000000000, 0.00000000000000000, 0.00000000000000000, 1.00171999999999994], assembly: [1.00000000000000000, 0.00672262000000000, -0.00364474000000000, 0.00000000000000000, 1.00000000000000000, 0.00101348000000000, -0.00000000000000000, 0.00000000000000000, 1.00000000000000000, 1.00000000000000000], drift: [0.00000000000000000, 0.00000000000000000, 0.00000000000000000], noise: [0.00000000000000000, 0.00000000000000000, 0.00000000000000000], bias: [0.00000000000000000, 0.00000000000000000, 0.00000000000000000], x: [0.00856165620000000, -0.00009840052800000], y: [0.05968393300000000, -0.00130967680000000], z: [0.01861442050000000, -0.00016033523000000]}, gyro: {scale: [1.00008999999999992, 0.00000000000000000, 0.00000000000000000, 0.00000000000000000, 0.99617599999999995, 0.00000000000000000, 0.00000000000000000, 0.00000000000000000, 1.00407000000000002], assembly: [1.00000000000000000, -0.00700362000000000, -0.00326206000000000, 0.00549571000000000, 1.00000000000000000, 0.00224867000000000, 0.00236088000000000, 0.00044507800000000, 1.00000000000000000, 1.00000000000000000], drift: [0.00000000000000000, 0.00000000000000000, 0.00000000000000000], noise: [0.00000000000000000, 0.00000000000000000, 0.00000000000000000], bias: [0.00000000000000000, 0.00000000000000000, 0.00000000000000000], x: [0.18721455299999998, 0.00077411070000000], y: [0.60837032000000002, -0.00939702710000000], z: [-0.78549276000000001, 0.02584820200000000]}}

Complete code examples, see get_imu_params.cc .

Set Open Parameters

Set the resolution of the image

Using the params.stream_mode parameter, you can set the resolution of the image.

Attention: the image resolution currently supports 4 types: 640x480 and 1280x720 for a single camera, and 1280x480 and 2560x720 for the left and right cameras together.

Reference code snippet:

// Stream mode: left color only
// params.stream_mode = StreamMode::STREAM_640x480;  // vga
// params.stream_mode = StreamMode::STREAM_1280x720;  // hd
// Stream mode: left+right color
// params.stream_mode = StreamMode::STREAM_1280x480;  // vga
params.stream_mode = StreamMode::STREAM_2560x720;  // hd
Set the frame rate of the image

Using the params.framerate parameter, you can set the frame rate of the image.

Note

The effective frame rate of the image is 0-60 fps; in 2560x720 resolution the effective frame rate is 30 fps.

Reference code snippet:

// Framerate: 30(default), [0,60], [30](STREAM_2560x720)
params.framerate = 30;
Set color mode

Using the params.color_mode parameter, you can set the color mode of the image.

COLOR_RAW is the original image; COLOR_RECTIFIED is the rectified image.

Reference code snippet:

// Color mode: raw(default), rectified
// params.color_mode = ColorMode::COLOR_RECTIFIED;
Set depth mode

Using the params.depth_mode parameter, you can set the depth mode.

DEPTH_COLORFUL is the colorful depth image, DEPTH_GRAY is the grey depth image, and DEPTH_RAW is the original depth image.

Reference code snippet:

// Depth mode: colorful(default), gray, raw
// params.depth_mode = DepthMode::DEPTH_GRAY;
Enable auto exposure and auto white balance

Set params.state_ae and params.state_awb to true to enable auto exposure and auto white balance.

By default, auto exposure and auto white balance are enabled; if you want to disable them, set the parameters to false .

Reference code snippet:

// Auto-exposure: true(default), false
// params.state_ae = false;

// Auto-white balance: true(default), false
// params.state_awb = false;
Enable IR and its adjustment function

Using the params.ir_intensity parameter, you can set the image's IR intensity. IR is enabled by setting params.ir_intensity greater than 0; the greater the value, the greater the IR intensity (the maximum is 10).

Reference code snippet:

// Infrared intensity: 0(default), [0,10]
params.ir_intensity = 4;

Note

After turning this function on, you can see the IR pattern:

_images/ir.png
Enable IR Depth Only

Using the params.ir_depth_only parameter, you can enable the IR Depth Only function. This is disabled by default. After turning this function on, IR only works on depth images, and the IR pattern will not show in color images.

Note

This feature is only available for [2560x720 30fps] and [1280x720, 1280x480, 640x480 60fps]. After turning this function on, the frame rate will be divided equally: for example, when the image frame rate is set to 30 fps, the frame rate of the color image is 15 fps and the frame rate of the depth image is 15 fps too. This feature will cause the left image and the depth map to be out of sync.

Reference code snippet:

// IR Depth Only: true, false(default)
// Note: IR Depth Only mode support frame rate between 15fps and 30fps.
//     When dev_mode != DeviceMode::DEVICE_ALL,
//       IR Depth Only mode not be supported.
//     When stream_mode == StreamMode::STREAM_2560x720,
//       frame rate only be 15fps in this mode.
//     When frame rate less than 15fps or greater than 30fps,
//       IR Depth Only mode will be not available.
// params.ir_depth_only = false;
Adjust colour depth value

Using the params.colour_depth_value parameter, you can adjust the colour depth value. The value is 5000 by default.

Reference code snippet:

// Colour depth image, default 5000. [0, 16384]
// params.colour_depth_value = 5000;

Reference running results on Linux:

Open device: 0, /dev/video1

D/eSPDI_API: SetPropertyValue control=7 value=0D/eSPDI_API: SetPropertyValue control=7 value=35D/eSPDI_API: SetPropertyValue control=7 value=1-- Auto-exposure state: enabled
D/eSPDI_API: SetPropertyValue control=7 value=0D/eSPDI_API: SetPropertyValue control=7 value=12D/eSPDI_API: SetPropertyValue control=7 value=1-- Auto-white balance state: enabled
-- Framerate: 5
D/eSPDI_API: SetPropertyValue control=7 value=4 SetDepthDataType: 4
-- Color Stream: 1280x720 YUYV
-- Depth Stream: 1280x720 YUYV

D/eSPDI_API: SetPropertyValue control=7 value=0D/eSPDI_API: SetPropertyValue control=7 value=3D/eSPDI_API: SetPropertyValue control=7 value=4
-- IR intensity: 4
D/eSPDI_API: CVideoDevice::OpenDevice 1280x720 fps=5

Open device success

Note

After changing the parameters, you need to run

make samples

in the SDK directory for the new parameters to take effect.

Complete code samples, see get_image.cc .

Camera Control Parameters API

Open or close auto exposure
/** Auto-exposure enabled or not  default enabled*/
bool AutoExposureControl(bool enable);    see "camera.h"
Open or close auto white balance
/** Auto-white-balance enabled or not  default enabled*/
bool AutoWhiteBalanceControl(bool enable);    see "camera.h"
Set infrared(IR) intensity
/** set infrared(IR) intensity [0, 10] default 4*/
void SetIRIntensity(const std::uint16_t &value);     see "camera.h"
Set global gain

Note

After opening the camera, you have to disable auto exposure first.

/** Set global gain [1 - 16]
 * value -- global gain value
 * */
void SetGlobalGain(const float &value);    see "camera.h"
Set the exposure time

Note

After opening the camera, you have to disable auto exposure first.

/** Set exposure time [1ms - 655ms]
 * value -- exposure time value
 * */
void SetExposureTime(const float &value);    see "camera.h"

Reference code snippet:

cam.Open(params);
cam.AutoExposureControl(false);
cam.SetGlobalGain(1);
cam.SetExposureTime(0.3);

Note

After changing the parameters, you need to run

make samples

in the SDK directory for the new parameters to take effect.

Use filters to filter depth data

Filter types inherit from BaseFilter .

The method interface protocol is as follows:

virtual bool ProcessFrame(
      std::shared_ptr<Image> out,
      const std::shared_ptr<Image> in) = 0; // NOLINT
virtual bool LoadConfig(void* data);

inline bool TurnOn();
inline bool TurnOff();
inline bool IsEnable();


int main(int argc, char const* argv[]) {
...

SpatialFilter spat_filter;
TemporalFilter temp_filter;

...
for (;;) {
  // get frame
  ...
  spat_filter.ProcessFrame(image_depth.img, image_depth.img);
  temp_filter.ProcessFrame(image_depth.img, image_depth.img);
  ...
}

Tip

To use a filter, instantiate it and then call ProcessFrame directly in the image processing loop. The filter adapts to the image information in real time. You can also use TurnOn / TurnOff to switch it in real time.
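
A custom filter can follow the same protocol by inheriting from BaseFilter and implementing ProcessFrame() . A hypothetical sketch (only the BaseFilter interface above is taken from the SDK; the class name and behavior are illustrative):

// Hypothetical filter that passes depth frames through unchanged.
class PassThroughFilter : public BaseFilter {
 public:
  bool ProcessFrame(std::shared_ptr<Image> out,
                    const std::shared_ptr<Image> in) override {
    if (!IsEnable()) return false;
    // A real filter would read depth values from `in`, modify them, and
    // write the result to `out`; note that in the loop above `out` and
    // `in` are the same image.
    return true;
  }
};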

SDK Tools

Analyze IMU Data

The SDK provides the script imu_analytics.py for IMU analysis. The tool details can be seen in tools/README.md .

Note

You need to use the record tool in tools , or rosbag , to record a dataset first. The IMU analysis tool supports Python 2.7. Before running the script, you need to pip install -r requirements.txt .

Reference to run commands on Linux:

$ python tools/analytics/imu_analytics.py -i dataset -c tools/config/mynteye/mynteye_config.yaml -al=-1.2,1.2 -gl= -gdu=d -gsu=d -kl=

Reference to results on Linux:

$ python tools/analytics/imu_analytics.py -i dataset -c tools/config/mynteye/mynteye_config.yaml -al=-1.2,1.2 -gl= -gdu=d -gsu=d -kl=
imu analytics ...
  input: dataset
  outdir: dataset
  gyro_limits: None
  accel_limits: [(-1.2, 1.2), (-1.2, 1.2), (-1.2, 1.2), (-1.2, 1.2)]
  time_unit: None
  time_limits: None
  auto: False
  gyro_show_unit: d
  gyro_data_unit: d
  temp_limits: None
open dataset ...
  imu: 20040, temp: 20040
  timebeg: 4.384450, timeend: 44.615550, duration: 40.231100
save figure to:
  dataset/imu_analytics.png
imu analytics done

The analysis result graph will be saved in the dataset directory, as follows:

imu analytics

In addition, the script can be run with -h to see its specific options:

$ python tools/analytics/imu_analytics.py -h

Analyze Time Stamps

The SDK provides the script stamp_analytics.py for timestamp analysis. The tool details can be seen in tools/README.md .

Note

You need to use the record tool in tools , or rosbag , to record a dataset first. The timestamp analysis tool supports Python 2.7. Before running the script, you need to pip install -r requirements.txt .

Reference run commands on Linux:

$ python tools/analytics/stamp_analytics.py -i dataset -c tools/config/mynteye/mynteye_config.yaml

Reference to results on Linux:

$ python tools/analytics/stamp_analytics.py -i dataset -c tools/config/mynteye/mynteye_config.yaml
stamp analytics ...
  input: dataset
  outdir: dataset
open dataset ...
save to binary files ...
  binimg: dataset/stamp_analytics_img.bin
  binimu: dataset/stamp_analytics_imu.bin
  img: 1007, imu: 20040

rate (Hz)
  img: 25, imu: 500
sample period (s)
  img: 0.04, imu: 0.002

diff count
  imgs: 1007, imus: 20040
  imgs_t_diff: 1006, imus_t_diff: 20039

diff where (factor=0.1)
  imgs where diff > 0.04*1.1 (0)
  imgs where diff < 0.04*0.9 (0)
  imus where diff > 0.002*1.1 (0)
  imus where diff < 0.002*0.9 (0)

image timestamp duplicates: 0

save figure to:
  dataset/stamp_analytics.png
stamp analytics done

The analysis result graph will be saved in the dataset directory, as follows:

stamp analytics

In addition, the script can be run with -h to see its specific options:

$ python tools/analytics/stamp_analytics.py -h

Record Data Sets

The SDK provides the tool record for recording data sets. Tool details can be seen in tools/README.md .

Reference run command on Linux:

./tools/_output/bin/dataset/record

Reference run command on Windows:

.\tools\_output\bin\dataset\record.bat

Reference run results on Linux:

$ ./tools/_output/bin/dataset/record
Saved 1007 imgs, 20040 imus to ./dataset
I0513 21:29:38.608772 11487 record.cc:118] Time beg: 2018-05-13 21:28:58.255395, end: 2018-05-13 21:29:38.578696, cost: 40323.3ms
I0513 21:29:38.608853 11487 record.cc:121] Img count: 1007, fps: 24.9732
I0513 21:29:38.608873 11487 record.cc:123] Imu count: 20040, hz: 496.983

Results are saved into <workdir>/dataset by default. You can also add a parameter to select another directory to save to.

Record contents:

<workdir>/
└─dataset/
   ├─left/
   │  ├─stream.txt  # Image information
   │  └─...
   ├─right/
   │  ├─stream.txt  # Image information
   │  └─...
   └─motion.txt  # IMU information

Tip

When recording data, the image display and cv::imwrite() calls in dataset.cc are commented out. Because these operations are time-consuming, they can cause images to be discarded; in other words, if consumption can't keep up with production, some images are discarded. GetStreamDatas() used in record.cc only caches the latest 4 images.

Save Device Information And Parameters

The SDK provides the tool save_all_infos for saving device information and parameters.

Reference commands:

./tools/_output/bin/writer/save_all_infos
# Windows
.\tools\_output\bin\writer\save_all_infos.bat

Reference result on Linux:

I/eSPDI_API: eSPDI: EtronDI_Init
Device descriptors:
  name: MYNT-EYE-D1000
  serial_number: 203837533548500F002F0028
  firmware_version: 1.0
  hardware_version: 2.0
  spec_version: 1.0
  lens_type: 0000
  imu_type: 0000
  nominal_baseline: 120

Results are saved into <workdir>/config by default. You can also add a parameter to select another directory to save to.

Saved contents:

<workdir>/
└─config/
   └─SN0610243700090720/
      ├─device.info
      └─imu.params

Complete code samples, see save_all_infos.cc .

Write IMU Parameters

The SDK provides the tool imu_params_writer to write IMU parameters.

For information about how to get the IMU parameters, please read Get IMU Calibration Parameters .

Reference commands:

./tools/_output/bin/writer/imu_params_writer tools/writer/config/imu.params

# Windows
.\tools\_output\bin\writer\imu_params_writer.bat tools\writer\config\imu.params

The parameters file can be found in tools/writer/config/imu.params . If you calibrated the parameters yourself, you can edit the file and run the above commands to write them into the device.

Warning: please be careful not to override the parameters; you can use save_all_infos to back them up first.

Complete code samples, see imu_params_writer.cc .

SDK Project Demos

How to use SDK with Visual Studio 2017

This tutorial will create a project with Visual Studio 2017 to start using the SDK.

You could find the project demo in <sdk>/platforms/projects/vs2017 directory.

Preparation
  • Windows: install the win pack of SDK

Create Project

Open Visual Studio 2017, then File > New > Project,

_images/1_new_pro.png

Select “Windows Console Application”, set the project’s name and location, click “OK”,

_images/2_new_pro.png

Finally, you will see the new project like this,

_images/3_new_pro.png
Config Properties

Right click the project, and open its “Properties” window,

_images/4_config.png

Change “Configuration” to “All Configurations”, then add the following paths to “Additional Include Directories”,

$(MYNTEYED_SDK_ROOT)\include
$(MYNTEYED_SDK_ROOT)\3rdparty\opencv\build\include
_images/5_config_include.png

Add the following definitions to “Preprocessor Definitions”,

WITH_OPENCV
WITH_OPENCV3
_images/6_config_definition.png

Add the following paths to “Additional Library Directories”,

$(MYNTEYED_SDK_ROOT)\lib
$(MYNTEYED_SDK_ROOT)\3rdparty\opencv\build\x64\vc15\lib
_images/7_config_lib_dir.png

Add the following libs to “Additional Dependencies”,

mynteye_depth.lib
opencv_world343.lib
_images/8_config_lib.png
Start using SDK

Include the SDK headers and start using its APIs,

_images/9_run_x64.png

Select “Release x64” to run the project.

How to use SDK with Qt Creator

This tutorial will create a Qt project with Qt Creator to start using the SDK.

You could find the project demo in <sdk>/platforms/projects/qtcreator directory.

Preparation
  • Windows: install the win pack of SDK

  • Linux: build from source and make install

Create Project

Open Qt Creator, then New Project,

_images/new_project.png

Choose Qt Widgets Application,

_images/new_project2.png

Set project location and its name,

_images/new_project3.png

Select the build kits,

_images/new_project4.png

Then, it will generate the skeleton source files,

_images/new_project5.png _images/new_project6.png

Finally, you will see the new project like this,

_images/new_project7.png
Config Project

Edit mynteyed_demo.pro to add INCLUDEPATH and LIBS.

win32 {
    SDK_ROOT = "$$(MYNTEYED_SDK_ROOT)"
    isEmpty(SDK_ROOT) {
        error( "MYNTEYED_SDK_ROOT not found, please install SDK firstly" )
    }
    message("SDK_ROOT: $$SDK_ROOT")

    INCLUDEPATH += "$$SDK_ROOT/include"
    LIBS += "$$SDK_ROOT/lib/mynteye_depth.lib"
}

unix {
    INCLUDEPATH += /usr/local/include
    LIBS += -L/usr/local/lib -lmynteye_depth
}
Start using SDK

Include the SDK headers and start using its APIs; see the project demo.

Windows

Select "Release" to run the project.

_images/release_run.png

Then you will see the main window,

_images/run_win.png
Linux

Run the project and you will see the main window,

_images/run_linux.png

How to use SDK with CMake

This tutorial will create a project with CMake to start using the SDK.

You could find the project demo in <sdk>/platforms/projects/cmake directory.

Preparation
  • Windows: install the win pack of SDK

  • Linux: build from source and make install

Create Project

Add CMakeLists.txt and mynteyed_demo.cc files,

cmake_minimum_required(VERSION 3.0)

project(mynteyed_demo VERSION 1.0.0 LANGUAGES C CXX)

# flags

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -O3")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -O3")

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -std=c++11 -march=native")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -march=native")

## mynteyed_demo

add_executable(mynteyed_demo mynteyed_demo.cc)
Config Project

Add mynteyed and OpenCV packages to CMakeLists.txt,

# packages

if(MSVC)
  set(SDK_ROOT "$ENV{MYNTEYED_SDK_ROOT}")
  if(SDK_ROOT)
    message(STATUS "MYNTEYED_SDK_ROOT: ${SDK_ROOT}")
    list(APPEND CMAKE_PREFIX_PATH
      "${SDK_ROOT}/lib/cmake"
      "${SDK_ROOT}/3rdparty/opencv/build"
    )
  else()
    message(FATAL_ERROR "MYNTEYED_SDK_ROOT not found, please install SDK firstly")
  endif()
endif()

## mynteyed

find_package(mynteyed REQUIRED)
message(STATUS "Found mynteye: ${mynteyed_VERSION}")

# When SDK build with OpenCV, we can add WITH_OPENCV macro to enable some
# features depending on OpenCV, such as ToMat().
if(mynteyed_WITH_OPENCV)
  add_definitions(-DWITH_OPENCV)
endif()

## OpenCV

# Set where to find OpenCV
#set(OpenCV_DIR "/usr/share/OpenCV")

# When the SDK is built with OpenCV, we must find the same version here.
find_package(OpenCV REQUIRED)
message(STATUS "Found OpenCV: ${OpenCV_VERSION}")

Add include_directories and target_link_libraries to mynteyed_demo target,

# targets

include_directories(
  ${OpenCV_INCLUDE_DIRS}
)

## mynteyed_demo

add_executable(mynteyed_demo mynteyed_demo.cc)
target_link_libraries(mynteyed_demo mynteye_depth ${OpenCV_LIBS})
Start using SDK

Include the SDK headers and start using its APIs; see the project demo for details.
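A possible mynteyed_demo.cc, as a minimal sketch (not the official demo; the mynteyed header paths, the MYNTEYE_USE_NAMESPACE macro, and the To()/ToMat() helpers are assumptions): it opens the device and polls the depth stream, using OpenCV for display only when WITH_OPENCV is defined, matching the macro added in CMakeLists.txt above.

#include <iostream>

#include "mynteyed/camera.h"
#include "mynteyed/utils.h"

#ifdef WITH_OPENCV
#include <opencv2/highgui/highgui.hpp>
#endif

MYNTEYE_USE_NAMESPACE

int main() {
  Camera cam;
  DeviceInfo dev_info;
  if (!util::select(cam, &dev_info)) return 1;

  OpenParams params;
  params.dev_index = dev_info.index;
  params.depth_mode = DepthMode::DEPTH_COLORFUL;
  cam.Open(params);
  if (!cam.IsOpened()) {
    std::cerr << "Open camera failed" << std::endl;
    return 1;
  }
  std::cout << "Open camera success" << std::endl;

  for (;;) {
    auto depth = cam.GetStreamData(ImageType::IMAGE_DEPTH);
#ifdef WITH_OPENCV
    if (depth.img) {
      cv::imshow("depth", depth.img->To(ImageFormat::DEPTH_BGR)->ToMat());
    }
    if (cv::waitKey(10) == 27) break;  // ESC to quit
#else
    if (depth.img) std::cout << "depth frame received" << std::endl;
    break;
#endif
  }
  cam.Close();
  return 0;
}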

Windows

See “Install Build Tools” in Windows Source Installation.

Then open “x64 Native Tools Command Prompt for VS 2017” command shell to build and run,

mkdir _build
cd _build

cmake -G "Visual Studio 15 2017 Win64" ..

msbuild.exe ALL_BUILD.vcxproj /property:Configuration=Release

.\Release\mynteyed_demo.exe
Linux

Open “Terminal” to build and run,

mkdir _build
cd _build/

cmake ..

make

./mynteyed_demo

Change Log

2019-08-26 v1.8.0

  1. Optimize the synchronization of image and IMU (requires updating the auxiliary chip firmware to 1.4).

  2. Fix an issue getting IMU params on Windows.

  3. Add 4.16+ kernel support on Ubuntu.

  4. Add armhf 32-bit support on Ubuntu.

2019-07-09 v1.7.9

  1. Fix inconsistent published timestamps when camera information is missing.

  2. Fix timestamp errors caused by incorrect ROS timestamp data types.

  3. Fix the long-running ROS timestamp problem (requires updating the auxiliary chip firmware to 1.3).

  4. Fix the ir_depth_only option of get_all_with_option not working.

2019-06-25 v1.7.8

  1. Add spatial filter.

  2. Add temporal filter.

  3. Fix a Windows compile error.

2019-05-29 v1.7.7

  1. Add relink function.

  2. Add independent compilation of the ROS wrapper.

2019-04-26 v1.7.6

  1. Fix no depth image when ir_depth_only is enabled.

  2. Fix point cloud jitter in the ROS display.

2019-04-17 v1.7.5

  1. Remove beta_ros wrapper.

  2. Publish default camera info for beta device.

  3. Add view point cloud ply file sample.

  4. Add slam launch to ros wrapper.

  5. Fix color anomaly in the ROS display.

2019-03-25 v1.7.4

  1. Fix compatibility of different devices in ROS camera info.

  2. Fix build problem when using a specific OpenCV version under Ubuntu 18.

2019-03-18 v1.7.3

  1. Add support for external sensors (ultrasonic sensors, GPS).

  2. Depth images and color images are synchronized by frame id.

  3. Add a sample compatible with USB 2.0.

  4. Fix the camera info frame rate published for the left and right eyes under ROS being twice the normal value.

  5. Document optimization.

Android SDK

SDK Explain

Supported Platform

Android 5.x ~ Android 8.x

SDK Download

zip format: apk, aar, demo, doc

SDK Install

  1. Get the SDK resources from SDK Download

  2. Create a new Android project (e.g., with Android Studio)

  3. Put the SDK’s aar file into the “libs” directory (app/libs)

  4. Add aar support to the “build.gradle” file, like below

dependencies {
    implementation fileTree(include: ['*.aar'], dir: 'libs')
    ....
}
  5. Build -> Make Project

SDK Samples

Get SDK info

SDK compile version
MYNTCamera.getSDKVersion();
SDK compile time
MYNTCamera.getSDKBuildTime();

Listener USB

Initialize USB Monitor
mUSBMonitor = USBMonitor(mContext, object : USBMonitor.IUSBMonitorListener {

    override fun didAttach(camera: MYNTCamera) {
        // Device attached
    }

    override fun didDettach(camera: MYNTCamera) {
        // Device detached
    }

    override fun didConnectedCamera(camera: MYNTCamera) {
        // Camera connected
    }

    override fun didDisconnectedCamera(camera: MYNTCamera) {
        // Camera disconnected
    }

})
Register USB Monitor (start listening for USB)
mUSBMonitor?.register()
Unregister USB Monitor (stop listening for USB)
mUSBMonitor?.unregister()
Release USB Monitor
mUSBMonitor?.destroy()

Open camera

Set the Camera connection callback
mCamera?.setCameraListener(object : MYNTCamera.ICameraListener {

    override fun didConnectedCamera(camera: MYNTCamera?) {

    }

    override fun didDisconnectedCamera(camera: MYNTCamera?) {

    }
})
Connect the camera (the permission dialog box will pop up)
mCamera?.connect()
After the connection is successful, retrieve the data
// Open device
mCamera?.open()
// Set IR value
mCamera?.irCurrentValue = IR_DEFAULT_VALUE

backgroundHandler?.post {
    if (mCamera == null) return@post
    // Surface for the color image preview
    mColorSurface = Surface(colorTextureView.surfaceTexture)
    // Surface for the depth image preview
    mDepthSurface = Surface(depthTextureView.surfaceTexture)

    mCamera?.setPreviewDisplay(mDepthSurface, MYNTCamera.Frame.DEPTH)
    mCamera?.setPreviewDisplay(mColorSurface, MYNTCamera.Frame.COLOR)
    // Set preview size (480 / 720)
    mCamera?.setPreviewSize(previewSize.height)
    // Set depth type (8bit / 11bit)
    mCamera?.setDepthType(depthType)
    // Set the image callback
    mCamera?.setFrameCallback { data ->
        if (data.flag == FrameData.DEPTH) {
            // Depth map
        }
        if (data.flag == FrameData.COLOR) {
            // Color map
        }
    }
    // Start previewing
    mCamera?.start(MYNTCamera.Source.ALL, MYNTCamera.Frame.ALL)
}

Get image data

Set image information callback
mCamera?.setFrameCallback { data ->
    if (data.flag == FrameData.DEPTH) {
        // depth image
    }
    if (data.flag == FrameData.COLOR) {
        // color image
    }
}

Get imu data

Set IMU data callback (camera with IMU model)
mCamera?.setImuCallback { data ->

    if (data.flag == ImuData.ACCELEROMETER) {
        runOnUiThread {
            accTextView.text = String.format("acc: x -> %.2f, y -> %.2f, z -> %.2f, timestamp -> %d, temperature -> %.2f", data.value[0], data.value[1], data.value[2], data.timestamp, data.temperature)
        }
    }
    if (data.flag == ImuData.GYROSCOPE) {
        runOnUiThread {
            gyroTextView.text = String.format("gyro: x -> %.2f, y -> %.2f, z -> %.2f, timestamp -> %d, temperature -> %.2f", data.value[0], data.value[1], data.value[2], data.timestamp, data.temperature)
        }
    }
}

Get rectify log data

Get the intrinsic parameters
val rectifyLogData = mCamera?.rectifyLogData

Update Log

v1.3.9 - 2019-11-07

  • Add local log file

  • Fix: no color data returned in no-preview mode

v1.3.2 - 2019-10-11

  • Support D-1000 (IMU, USB 2.0/3.0)

  • Improve point cloud registration

  • Support getting the right frame

  • Add an interface to get the depth array converted from FrameData

  • Improve the preview frame

  • Fix some issues

v1.2.5 - 2019-07-01

  • Save point cloud data locally

  • View the point cloud file

  • Fix camera not displaying after the app returns from the background

  • Add an interface to toggle the frame rate display

v1.2.4 - 2019-05-27

  • Modify camera sample

  • Add a depth data save/read example

  • Fix crash caused by closing the camera

  • Fix crash caused by turning on the camera

v1.2.3 - 2019-04-10

  • Support enabling and disabling AE

  • Support enabling and disabling AWB

  • Support synchronously getting depth images & color images

  • Add the ranging method for FrameData

v1.2.2 - 2019-03-28

  • Fix ANR problem when plugging/unplugging the device

  • Add a new interface for obtaining the camera’s intrinsic parameters

  • SDK stability improvements

v1.2.1 - 2019-02-26

  • The D-1000 device supports IMU data callbacks

v1.2.0 - 2019-02-26

  • Compatible with D-1200 devices

  • Image compatibility with D-1000 devices (IMU / IR temporarily unavailable)

FIRMWARE

Firmware Update

Update Main Chip Firmware

Note

This tool does not support beta device upgrade.

Get Main Chip Firmware

Latest firmware: MYNTEYE-D-1.0.6.bin Google Drive, Baidu Pan

Get Update Tool

Latest tool: eSPWriter_1.0.6.zip Google Drive, Baidu Pan

Update Firmware

Note

Please follow the steps to upgrade the firmware. (Otherwise, the camera calibration parameters will be lost.)

1. Select camera device.

2. Select data type (256KB).

3. Select chip firmware.

4. Select Keep tables (in order to keep calibration parameters).

5. Click Write.

Use the tool according to diagram:

update tool

Update Auxiliary Chip Firmware

Get Auxiliary Chip Firmware

Latest firmware: MYNTEYE-D1000-auxiliary-chip-x.x.x.bin Google Drive, Baidu Pan

Compile SDK Tools
cd <sdk>  # local path of SDK
make tools
Update Firmware
./tools/_output/bin/writer/auxiliary_firmware_update <firmware-file-path>
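The SDK also exposes this update through Camera::AuxiliaryChipFirmwareUpdate (see API DOCS). A minimal sketch, assuming the mynteyed header paths and that the device is selected and opened first:

#include <iostream>

#include "mynteyed/camera.h"
#include "mynteyed/utils.h"

MYNTEYE_USE_NAMESPACE

int main(int argc, char *argv[]) {
  if (argc < 2) {
    std::cerr << "usage: " << argv[0] << " <firmware-file-path>" << std::endl;
    return 1;
  }

  Camera cam;
  DeviceInfo dev_info;
  if (!util::select(cam, &dev_info)) return 1;
  OpenParams params;
  params.dev_index = dev_info.index;
  cam.Open(params);
  if (!cam.IsOpened()) return 1;

  // Write the auxiliary chip firmware file to the device.
  bool ok = cam.AuxiliaryChipFirmwareUpdate(argv[1]);
  std::cout << (ok ? "Update success" : "Update failed") << std::endl;
  cam.Close();
  return 0;
}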

Change Log

Auxiliary Chip Firmware

2019-08-26 v1.4.2

MYNTEYE-D-auxiliary-chip-1.4.2.bin

  1. Optimize the synchronization of image and imu

2019-07-09 v1.3.0

MYNTEYE-D-auxiliary-chip-1.3.0.bin

  1. Fixed the long-running ROS timestamp problem.

TOOLS SUPPORT

Calibration Tool Manual

Get Calibration Tool

Latest tool: mynteye-d-calibrator_1.0.zip Google Drive, Baidu Pan

Prerequisites(Update config file)

  • You can find the Depth 50°’s config file in D1000-50, and the Depth 120°’s config file in D1000-120.

  • The config file in the HD folder is for 720p; the one in the VGA folder is for 480p. You need to calibrate both resolutions for the camera.

  • Copy eDepthK.prj into the mynteye-d-calibrator_1.0 folder.

  • Open eDepthK.prj with a text editor and set Col1/2/3/4 to the chessboard width, Row1/2/3/4 to the chessboard height, and Size1/2/3/4 to the chessboard square size in meters.

  • Chessboard width and height refer to the number of black-and-white intersections in the horizontal and vertical directions of the checkerboard.

Example of 11x7 Intersection Chess Board

_images/calibration005.png

Parameters of eSPCalibrator

_images/calibration004.png

1. Open eDepthK.prj
2. Note that ‘Col1’, ‘Row1’ and ‘Size1’ must match your chess board

Calibration Procedure 1 (Yoffset)

  • If you are calibrating VGA mode, please skip this procedure.

  • Calibration Procedure 1 needs 1 picture.

  • The chess board must be right in front of both cameras and cover the maximum portion (over 50%) of the preview image (try your best).

  • Press ‘c’ or ‘C’ to take the snapshot of the properly positioned chess board. If the calibrator cannot detect all the intersections on the preview image, you will get a “Not Found” result.

Operation guide

1. Double click mynteye-d-calibrator.exe
2. Press ‘c’ or ‘C’ to take the snapshot (total one frame)

_images/calibration001.png

Calibration Procedure 2 (Calibration)

  • Calibration needs 5 pictures from 5 different angles.

  • The required angles are combinations of rotations around the X and Y axes; each rotation angle should be 10° to 30°.

  • The chess board must cover the maximum portion (over 50%) of the preview image from both cameras (try your best).

  • Press ‘c’ or ‘C’ to take the snapshot of the properly positioned chess board. If the calibrator cannot detect all the intersections on the chess board, you will get a “Not Found” result.

Operation guide

_images/calibration002.png

Calibration Result

  • After calibration, the parameters are automatically written into the device.

_images/calibration003.png
  • After calibration, you can find the reprojection error in the log file StereoSetting.txt; it is desirable to have a reprojection error of 0.2 or less. If it exceeds 0.5, the camera needs to be recalibrated.

Log File

  • After calibration, the log file will be saved into Log_Folder.

_images/calibration006.png

Appendix

Error_Message : Yoffset

Error Message: Yoffset Not support format.
Possible root cause:
  1. FW issue, check page.14
  2. eDepthK.prj setting error

Error Message: No Device
Possible root cause:
  1. USB unstable

Error Message: Yoffset Cannot Preview Resolution
Possible root cause:
  1. FW issue, check page.14
  2. eDepthK.prj setting error

Error_Message : Calibration

Error Message: Calibration Not support format.
Possible root cause:
  1. FW issue, check page.14
  2. eDepthK.prj setting error

Error Message: No Device
Possible root cause:
  1. USB unstable

Error Message: Calibration Cannot Preview Resolution
Possible root cause:
  1. FW issue, check page.14
  2. eDepthK.prj setting error

Error Message: Calibration fail : Calib_Line_Buffer_Err
Possible root cause:
  1. linebuffer > 160, quality error

Error Message: Calibration fail : Calib_reproject_err
Possible root cause:
  1. reprojection err > 1.75, quality error

Error Message: Calibration Write flash fail
Possible root cause:
  1. FW issue, check page.14

Error_Message : ZD

Error Message: ZD initialization Fail
Possible root cause:
  1. FW issue, check page.14
  2. eDepthK.prj setting error

Error Message: No Device
Possible root cause:
  1. USB unstable

Error Message: Cannot Preview Resolution
Possible root cause:
  1. FW issue, check page.14
  2. eDepthK.prj setting error

Error Message: Write ZD Table Fail
Possible root cause:
  1. FW issue, check page.14

OPEN SOURCE SUPPORT

How To Use In VINS-Mono

If you want to run VINS-Mono with a MYNT EYE camera, please follow these steps:

  1. Download MYNT-EYE-D-SDK and follow ROS Wrapper Installation.

  2. Follow the normal procedure to install VINS-Mono.

  3. Run mynteye_wrapper_d and VINS-Mono.

Install ROS Kinetic conveniently (if already installed, please ignore)

cd ~
wget https://raw.githubusercontent.com/oroca/oroca-ros-pkg/master/ros_install.sh && \
chmod 755 ./ros_install.sh && bash ./ros_install.sh catkin_ws kinetic

Run VINS-Mono with docker

Note

To compile with docker, we recommend using more than 16G of RAM, or ensuring that the RAM plus virtual memory space is greater than 16G.

Install docker
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

Tip

Add your account to the docker group with sudo usermod -aG docker $YOUR_USER_NAME. Relaunch the terminal, or log out and log back in, if you get a Permission denied error.

Install MYNT-EYE-VINS-Samples
git clone -b docker_feat https://github.com/slightech/MYNT-EYE-VINS-Sample.git
cd MYNT-EYE-VINS-Sample/docker
make build
Run VINS-MONO
  1. Run mynteye node

cd MYNT-EYE-D-SDK  # local path of MYNT-EYE-D-SDK
source ./wrappers/ros/devel/setup.bash
roslaunch mynteye_wrapper_d vins_mono.launch stream_mode:=0
  2. Open another terminal to run vins-mono

cd MYNT-EYE-VINS-Sample/docker  # local path of MYNT-EYE-VINS-Sample
./run.sh mynteye_d.launch

How to Use In ORB_SLAM2

If you want to run ORB_SLAM2 with a MYNT EYE camera, please follow these steps:

  1. Download MYNT-EYE-D-SDK and follow ROS Wrapper Installation.

  2. Follow the normal procedure to install ORB_SLAM2.

  3. Run the examples with MYNT® EYE.

Prerequisites

sudo apt-get -y install libglew-dev cmake
cd ~
git clone https://github.com/stevenlovegrove/Pangolin.git
cd Pangolin
mkdir build
cd build
cmake ..
cmake --build .
sudo make install

Building the nodes for stereo (ROS)

  • Add the path including Examples/ROS/ORB_SLAM2 to the ROS_PACKAGE_PATH environment variable. Open the .bashrc file and add the following line at the end, replacing PATH with the folder where you cloned ORB_SLAM2:

export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/ORB_SLAM2/Examples/ROS
  • Execute build_ros.sh:

chmod +x build.sh
./build.sh
chmod +x build_ros.sh
./build_ros.sh
Stereo_ROS Example

Run camera mynteye_wrapper_d

cd [path of mynteye-d-sdk]
make ros
source ./wrappers/ros/devel/setup.bash
roslaunch mynteye_wrapper_d orb_slam2.launch

Open another terminal and run ORB_SLAM2

rosrun ORB_SLAM2 mynteye_d_stereo ./Vocabulary/ORBvoc.txt ./config/mynteye_d_stereo.yaml true /mynteye/left/image_mono /mynteye/right/image_mono

How To Use In OKVIS

If you want to run OKVIS with a MYNT EYE camera, please follow these steps:

  1. Download MYNT-EYE-D-SDK and follow ROS Wrapper Installation.

  2. Install dependencies and build MYNT-EYE-OKVIS-Sample following the procedure of the original OKVIS.

  3. Update the camera parameters in <OKVIS>/config/config_mynteye.yaml.

  4. Run OKVIS using MYNT® EYE.

Tip

OKVIS doesn’t support ARM right now.

Install MYNTEYE OKVIS

First install the dependencies based on the original OKVIS, then follow:

sudo apt-get install libgoogle-glog-dev

git clone -b mynteye https://github.com/slightech/MYNT-EYE-OKVIS-Sample.git
cd MYNT-EYE-OKVIS-Sample/
mkdir build && cd build
cmake ..
make

Get camera calibration parameters

Through the GetIntrinsics() and GetExtrinsics() functions of the MYNT-EYE-D-SDK API, you can get the camera calibration parameters of the currently open device. Follow these steps:

cd MYNT-EYE-D-SDK
./samples/_output/bin/get_img_params

After running the above, the pinhole distortion_parameters and camera parameters are obtained. Update them in <OKVIS>/config/config_mynteye.yaml according to the following format; note that only the first four parameters of coeffs need to be filled into the distortion_coefficients.

distortion_coefficients: [coeffs]   # only first four parameters of coeffs need to be filled
focal_length: [fx, fy]
principal_point: [cx, cy]
distortion_type: radialtangential

Run MYNTEYE OKVIS

Run camera mynteye_wrapper_d

cd MYNT-EYE-D-SDK
source wrappers/ros/devel/setup.bash
roslaunch mynteye_wrapper_d mynteye.launch

To run MYNT-EYE-OKVIS-Sample, open another terminal and follow these steps.

cd MYNT-EYE-OKVIS-Sample/build
source devel/setup.bash
roslaunch okvis_ros mynteye_d.launch

Then use rviz to display:

cd ~/catkin_okvis/src/MYNT-EYE-OKVIS-Sample/config
rosrun rviz rviz -d rviz.rviz

How To Use In VIORB

If you want to run VIORB with MYNT® EYE, please follow these steps:

  1. Download MYNT-EYE-D-SDK and follow ROS Wrapper Installation.

  2. Follow the normal procedure to install VIORB.

  3. Update the camera parameters in <VIO>/config/mynteye_d.yaml.

  4. Run mynteye_wrapper_d and VIORB.

Install MYNT-EYE-VIORB-Sample

git clone -b mynteye https://github.com/slightech/MYNT-EYE-VIORB-Sample.git
cd MYNT-EYE-VIORB-Sample

Add the path including Examples/ROS/ORB_VIO to the ROS_PACKAGE_PATH environment variable. Open the .bashrc file and add the following line at the end, replacing PATH with the folder where you cloned MYNT-EYE-VIORB-Sample:

export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/Examples/ROS/ORB_VIO

Execute:

cd MYNT-EYE-VIORB-Sample
./build.sh

Get image calibration parameters

Assume that the left eye of the mynteye camera is used with IMU. Through the GetIntrinsics() and GetExtrinsics() functions of the MYNT-EYE-D-SDK API, you can get the image calibration parameters of the currently open device:

cd MYNT-EYE-D-SDK
./samples/_output/bin/get_img_params

After running the above, the pinhole distortion_parameters and projection_parameters are obtained; update them in <MYNT-EYE-VIORB-Sample>/config/mynteye_d.yaml.

Run VIORB and mynteye_wrapper_d

  1. Launch mynteye node

roslaunch mynteye_wrapper_d mynteye.launch
  2. Open another terminal and run viorb

roslaunch ORB_VIO testmynteye_d.launch

Finally, pyplotscripts can be used to visualize some results.

How To Use In VINS-Fusion

If you want to run VINS-Fusion with a MYNT EYE camera, please follow these steps:

  1. Download MYNT-EYE-D-SDK and follow ROS Wrapper Installation.

  2. Follow the normal procedure to install VINS-Fusion.

  3. Run mynteye_wrapper_d and VINS-Fusion.

Install ROS Kinetic conveniently (if already installed, please ignore)

cd ~
wget https://raw.githubusercontent.com/oroca/oroca-ros-pkg/master/ros_install.sh && \
chmod 755 ./ros_install.sh && bash ./ros_install.sh catkin_ws kinetic

Run VINS-FUSION with docker

Note

To compile with docker, we recommend using more than 16G of RAM, or ensuring that the RAM plus virtual memory space is greater than 16G.

Install docker
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

Tip

Add your account to the docker group with sudo usermod -aG docker $YOUR_USER_NAME. Relaunch the terminal, or log out and log back in, if you get a Permission denied error.

Install MYNT-EYE-VINS-FUSION-Samples
git clone -b docker_feat https://github.com/slightech/MYNT-EYE-VINS-FUSION-Samples.git
cd MYNT-EYE-VINS-FUSION-Samples/docker
make build
Run VINS_FUSION
  1. Run mynteye node

cd MYNT-EYE-D-SDK  # local path of MYNT-EYE-D-SDK
source ./wrappers/ros/devel/setup.bash
roslaunch mynteye_wrapper_d vins_fusion.launch stream_mode:=1 # stereo camera with 640x480
  2. Open another terminal to run vins-fusion

cd MYNT-EYE-VINS-FUSION-Samples/docker  # local path of MYNT-EYE-VINS-FUSION-Samples
./run.sh mynteye-d/mynt_mono_config.yaml  # mono+imu fusion
# ./run.sh mynteye-d/mynt_stereo_config.yaml  # Stereo fusion
# ./run.sh mynteye-d/mynt_stereo_imu_config.yaml # Stereo+imu fusion

API DOCS

Camera

class Camera

Public Functions

std::vector<DeviceInfo> GetDeviceInfos() const

Get all device infos.

void GetDeviceInfos(std::vector<DeviceInfo> *dev_infos) const

Get all device infos.

void GetStreamInfos(const std::int32_t &dev_index, std::vector<StreamInfo> *color_infos, std::vector<StreamInfo> *depth_infos) const

Get all stream infos.

ErrorCode Open()

Open camera.

ErrorCode Open(const OpenParams &params)

Open camera with params.

bool IsOpened() const

Whether the camera is opened or not.

OpenParams GetOpenParams() const

Get open params.

std::shared_ptr<device::Descriptors> GetDescriptors() const

Get all device descriptors.

std::string GetDescriptor(const Descriptor &desc) const

Get one device descriptor.

StreamIntrinsics GetStreamIntrinsics(const StreamMode &stream_mode) const

Get the intrinsics of camera.

StreamIntrinsics GetStreamIntrinsics(const StreamMode &stream_mode, bool *ok) const

Get the intrinsics of camera.

StreamExtrinsics GetStreamExtrinsics(const StreamMode &stream_mode) const

Get the extrinsics of camera.

StreamExtrinsics GetStreamExtrinsics(const StreamMode &stream_mode, bool *ok) const

Get the extrinsics of camera.

bool WriteCameraCalibrationBinFile(const std::string &filename)

Write camera calibration bin file.

MotionIntrinsics GetMotionIntrinsics() const

Get the intrinsics of motion.

MotionIntrinsics GetMotionIntrinsics(bool *ok) const

Get the intrinsics of motion.

MotionExtrinsics GetMotionExtrinsics() const

Get the extrinsics from left to motion.

MotionExtrinsics GetMotionExtrinsics(bool *ok) const

Get the extrinsics from left to motion.

bool IsWriteDeviceSupported() const

Whether writing to the device is supported or not.

bool WriteDeviceFlash(device::Descriptors *desc, device::ImuParams *imu_params, Version *spec_version = nullptr)

Write device flash.

void EnableProcessMode(const ProcessMode &mode)

Enable process mode, e.g. imu assembly, temp_drift.

void EnableProcessMode(const std::int32_t &mode)

Enable process mode, e.g. imu assembly, temp_drift.

bool IsImageInfoSupported() const

Whether image info is supported or not.

void EnableImageInfo(bool sync)

Enable image infos.

If sync is false, the info is only available from the callback. If sync is true, the info is available from the callback or can be accessed from StreamData.
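For example, a sketch (assuming an already-opened Camera named cam and the header conventions used elsewhere in this document; ImgInfo and StreamData are documented below):

// With sync == true, the info rides along with the stream data.
cam.EnableImageInfo(true);

auto data = cam.GetStreamData(ImageType::IMAGE_LEFT_COLOR);
if (data.img && data.img_info) {
  std::cout << "frame_id: " << data.img_info->frame_id
            << ", timestamp: " << data.img_info->timestamp
            << ", exposure_time: " << data.img_info->exposure_time
            << std::endl;
}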

void DisableImageInfo()

Disable image info.

bool IsImageInfoEnabled() const

Whether image info is enabled or not.

bool IsImageInfoSynced() const

Whether image info is synced or not.

bool IsStreamDataEnabled(const ImageType &type) const

Whether stream data of a certain image type is enabled or not.

bool HasStreamDataEnabled() const

Whether any stream data is enabled.

StreamData GetStreamData(const ImageType &type)

Get latest stream data.

std::vector<StreamData> GetStreamDatas(const ImageType &type)

Get cached stream datas.

bool IsMotionDatasSupported() const

Whether motion datas are supported or not.

void EnableMotionDatas(std::size_t max_size = std::numeric_limits<std::size_t>::max())

Enable motion datas.

If max_size <= 0, datas are only available from the callback. If max_size > 0, datas are available from the callback or via GetMotionDatas().

Note: if max_size > 0, the motion datas will be cached until you call GetMotionDatas().

void DisableMotionDatas()

Disable motion datas.

bool IsMotionDatasEnabled() const

Whether motion datas are enabled or not.

std::vector<MotionData> GetMotionDatas()

Get cached motion datas.

Besides, you can also get them from callback
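For example, a sketch of both paths (assuming an already-opened Camera named cam; the callback parameter type is an assumption based on MotionData and motion_callback_t):

// Callback path: delivered as datas arrive.
cam.SetMotionCallback([](const MotionData &data) {
  if (data.imu) {
    std::cout << "imu flag=" << static_cast<int>(data.imu->flag)
              << " timestamp=" << data.imu->timestamp << std::endl;
  }
});

// Polling path: requires EnableMotionDatas(max_size > 0) so datas are cached.
cam.EnableMotionDatas(100);
for (auto &&motion : cam.GetMotionDatas()) {
  if (!motion.imu) continue;
  std::cout << "accel x=" << motion.imu->accel[0]
            << " gyro z=" << motion.imu->gyro[2] << std::endl;
}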

void SetImgInfoCallback(img_info_callback_t callback, bool async = true)

Set image info callback.

void SetStreamCallback(const ImageType &type, stream_callback_t callback, bool async = true)

Set stream data callback.

void SetMotionCallback(motion_callback_t callback, bool async = true)

Set motion data callback.

void Close()

Close the camera.

void SetExposureTime(const float &value)

Set the exposure time, range [1ms, 655ms]. value: the exposure time value.

void GetExposureTime(float &value)

Get the exposure time value.

void SetGlobalGain(const float &value)

Set the global gain, range [1, 16]. value: the global gain value.

void GetGlobalGain(float &value)

Get the global gain value.

void SetIRIntensity(const std::uint16_t &value)

Set infrared (IR) intensity, range [0, 10], default 4.

bool AutoExposureControl(bool enable)

Enable or disable auto-exposure (default enabled).

bool AutoWhiteBalanceControl(bool enable)

Enable or disable auto-white-balance (default enabled).

bool IsLocationDatasSupported() const

Whether location datas are supported or not.

void EnableLocationDatas(std::size_t max_size = std::numeric_limits<std::size_t>::max())

Enable location datas.

If max_size <= 0, datas are only available from the callback. If max_size > 0, datas are available from the callback or via GetLocationDatas().

Note: if max_size > 0, the location datas will be cached until you call GetLocationDatas().

void DisableLocationDatas()

Disable location datas.

bool IsLocationDatasEnabled() const

Whether location datas are enabled or not.

std::vector<LocationData> GetLocationDatas()

Get cached location datas.

Besides, you can also get them from callback

void SetLocationCallback(location_callback_t callback, bool async = true)

Set location data callback.

bool IsDistanceDatasSupported() const

Whether distance datas are supported or not.

void EnableDistanceDatas(std::size_t max_size = std::numeric_limits<std::size_t>::max())

Enable distance datas.

If max_size <= 0, datas are only available from the callback. If max_size > 0, datas are available from the callback or via GetDistanceDatas().

Note: if max_size > 0, the distance datas will be cached until you call GetDistanceDatas().

void DisableDistanceDatas()

Disable distance datas.

bool IsDistanceDatasEnabled() const

Whether distance datas are enabled or not.

std::vector<DistanceData> GetDistanceDatas()

Get cached distance datas.

Besides, you can also get them from callback

void SetDistanceCallback(distance_callback_t callback, bool async = true)

Set distance data callback.

bool AuxiliaryChipFirmwareUpdate(const char *filepath)

Update auxiliary chip firmware.

void ControlReconnectStatus(const bool &status)

Control the reconnect status (default enabled).

Device

DeviceInfo

struct DeviceInfo

Device information.

Public Members

std::int32_t index

The device index.

std::string name

The device name.

std::uint16_t type

The device type.

std::uint16_t pid

The product id.

std::uint16_t vid

The vendor id.

std::uint16_t chip_id

The chip id.

std::string fw_version

The firmware version.

std::string sn

The serial number.

Image

class Image

Subclassed by mynteyed::ImageColor, mynteyed::ImageDepth

OpenParams

struct OpenParams

Device open parameters.

Public Functions

OpenParams()

Constructor.

~OpenParams()

Destructor.

Public Members

std::int32_t dev_index

Device index.

std::int32_t framerate

Framerate, range [0,60] ([0,30] for STREAM_2560x720), default 10.

DeviceMode dev_mode

Device mode, default DEVICE_ALL.

  • DEVICE_COLOR: IMAGE_LEFT_COLOR y IMAGE_RIGHT_COLOR - IMAGE_DEPTH n

  • DEVICE_DEPTH: IMAGE_LEFT_COLOR n IMAGE_RIGHT_COLOR n IMAGE_DEPTH y

  • DEVICE_ALL: IMAGE_LEFT_COLOR y IMAGE_RIGHT_COLOR - IMAGE_DEPTH y

Whether an image type is enabled after opening can be checked through Camera::IsStreamDataEnabled().

Note: y: available, n: unavailable, -: depends on stream_mode

ColorMode color_mode

Color mode, default COLOR_RAW.

DepthMode depth_mode

Depth mode, default DEPTH_COLORFUL.

StreamMode stream_mode

Stream mode of color & depth, default STREAM_1280x720.

StreamFormat color_stream_format

Stream format of color, default STREAM_YUYV.

StreamFormat depth_stream_format

Stream format of depth, default STREAM_YUYV.

bool state_ae

Auto-exposure, default true.

bool state_awb

Auto-white balance, default true.

std::uint8_t ir_intensity

IR (Infrared), range [0,10], default 0.

bool ir_depth_only

IR Depth Only mode, default false.

Note: When the frame rate is less than 30fps, IR Depth Only will not be available.

float colour_depth_value

Colour depth value, default 5000, range [0, 16384].
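For example, a sketch of filling these fields before Camera::Open() (a fragment; the values are illustrative only):

OpenParams params;
params.dev_index = 0;
params.framerate = 30;                             // [0,60]; [0,30] for STREAM_2560x720
params.dev_mode = DeviceMode::DEVICE_ALL;          // color + depth
params.color_mode = ColorMode::COLOR_RECTIFIED;
params.depth_mode = DepthMode::DEPTH_RAW;
params.stream_mode = StreamMode::STREAM_2560x720;  // 720p, left+right
params.state_ae = true;                            // auto-exposure
params.state_awb = true;                           // auto-white balance
params.ir_intensity = 4;                           // [0,10]
params.ir_depth_only = false;

Camera cam;
cam.Open(params);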

StreamInfo

struct StreamInfo

Stream information.

Public Members

std::int32_t index

The stream index.

std::int32_t width

The stream width.

std::int32_t height

The stream height.

StreamFormat format

The stream format.

Enums

ErrorCode

enum mynteyed::ErrorCode

List error codes.

Values:

SUCCESS = 0

Standard code for successful behavior.

ERROR_FAILURE

Standard code for unsuccessful behavior.

ERROR_FILE_OPEN_FAILED

File cannot be opened because it does not exist, is not a regular file, or for any other reason.

ERROR_CAMERA_OPEN_FAILED

Camera cannot be opened because it is not plugged in, or for any other reason.

ERROR_CAMERA_NOT_OPENED

Camera is not opened now.

ERROR_CAMERA_RETRIEVE_FAILED

Camera failed to retrieve the image.

ERROR_IMU_OPEN_FAILED

Imu cannot be opened because it is not plugged in, or for any other reason.

ERROR_IMU_RECV_TIMEOUT

Imu data receive timeout.

ERROR_IMU_DATA_ERROR

Imu data receive error.

ERROR_CODE_LAST

Last guard.

Descriptor

enum mynteyed::Descriptor

The descriptor fields.

Values:

DEVICE_NAME

Device name.

SERIAL_NUMBER

Serial number.

FIRMWARE_VERSION

Firmware version.

HARDWARE_VERSION

Hardware version.

SPEC_VERSION

Spec version.

LENS_TYPE

Lens type.

IMU_TYPE

IMU type.

NOMINAL_BASELINE

Nominal baseline.

DESC_LAST

Last guard.

ProcessMode

enum mynteyed::ProcessMode

Process modes.

Values:

PROC_NONE = 0
PROC_IMU_ASSEMBLY = 1
PROC_IMU_TEMP_DRIFT = 2
PROC_IMU_ALL = PROC_IMU_ASSEMBLY | PROC_IMU_TEMP_DRIFT

DeviceMode

enum mynteyed::DeviceMode

List device modes.

Control whether the color & depth streams are enabled or not.

Note: y: available, n: unavailable, -: depends on StreamMode

Values:

DEVICE_COLOR = 0

IMAGE_LEFT_COLOR y IMAGE_RIGHT_COLOR - IMAGE_DEPTH n.

DEVICE_DEPTH = 1

IMAGE_LEFT_COLOR n IMAGE_RIGHT_COLOR n IMAGE_DEPTH y.

DEVICE_ALL = 2

IMAGE_LEFT_COLOR y IMAGE_RIGHT_COLOR - IMAGE_DEPTH y.

ColorMode

enum mynteyed::ColorMode

List color modes.

Values:

COLOR_RAW = 0

color raw

COLOR_RECTIFIED = 1

color rectified

COLOR_MODE_LAST

DepthMode

enum mynteyed::DepthMode

List depth modes.

Values:

DEPTH_RAW = 0

ImageFormat::DEPTH_RAW.

DEPTH_GRAY = 1

ImageFormat::DEPTH_GRAY_24.

DEPTH_COLORFUL = 2

ImageFormat::DEPTH_RGB.

DEPTH_MODE_LAST

StreamMode

enum mynteyed::StreamMode

List stream modes.

Values:

STREAM_640x480 = 0

480p, vga, left

STREAM_1280x480 = 1

480p, vga, left+right

STREAM_1280x720 = 2

720p, hd, left

STREAM_2560x720 = 3

720p, hd, left+right

STREAM_MODE_LAST

StreamFormat

enum mynteyed::StreamFormat

List stream formats.

Values:

STREAM_MJPG = 0
STREAM_YUYV = 1
STREAM_FORMAT_LAST

ImageType

enum mynteyed::ImageType

List image types.

Values:

IMAGE_LEFT_COLOR

LEFT Color.

IMAGE_RIGHT_COLOR

RIGHT Color.

IMAGE_DEPTH

Depth.

IMAGE_ALL

All.

ImageFormat

enum mynteyed::ImageFormat

List image formats.

Values:

IMAGE_BGR_24

8UC3

IMAGE_RGB_24

8UC3

IMAGE_GRAY_8

8UC1

IMAGE_GRAY_16

16UC1

IMAGE_GRAY_24

8UC3

IMAGE_YUYV

8UC2

IMAGE_MJPG
COLOR_BGR = IMAGE_BGR_24
COLOR_RGB = IMAGE_RGB_24
COLOR_YUYV = IMAGE_YUYV
COLOR_MJPG = IMAGE_MJPG
DEPTH_RAW = IMAGE_GRAY_16
DEPTH_GRAY = IMAGE_GRAY_8
DEPTH_GRAY_24 = IMAGE_GRAY_24
DEPTH_BGR = IMAGE_BGR_24
DEPTH_RGB = IMAGE_RGB_24
IMAGE_FORMAT_LAST

Last guard.

SensorType

enum mynteyed::SensorType

SensorType types.

Values:

SENSOR_TYPE_H22 = 0
SENSOR_TYPE_OV7740
SENSOR_TYPE_AR0134
SENSOR_TYPE_AR0135
SENSOR_TYPE_OV9714

SensorMode

enum mynteyed::SensorMode

SensorMode modes.

Values:

LEFT = 0
RIGHT
ALL

Types

Data

ImgInfo
struct ImgInfo

Image information.

Public Members

std::uint16_t frame_id

Image frame id.

std::uint32_t timestamp

Image timestamp.

std::uint16_t exposure_time

Image exposure time.

ImuData
struct ImuData

Imu data.

Public Members

std::uint8_t flag

Data type. MYNTEYE_IMU_ACCEL: accelerometer; MYNTEYE_IMU_GYRO: gyroscope.

std::uint64_t timestamp

Imu gyroscope or accelerometer or frame timestamp.

double temperature

temperature

double accel[3]

Imu accelerometer data for 3-axis: X, Y, Z.

double gyro[3]

Imu gyroscope data for 3-axis: X, Y, Z.

StreamData
struct StreamData

Stream data.

Public Members

std::shared_ptr<Image> img

Image data.

std::shared_ptr<ImgInfo> img_info

Image information.

MotionData
struct MotionData

Motion data.

Public Members

std::shared_ptr<ImuData> imu

ImuData.

Calib

CameraIntrinsics
struct CameraIntrinsics

Camera intrinsics: size, coeffs and camera matrix.

Public Members

std::uint16_t width

The width of the image in pixels.

std::uint16_t height

The height of the image in pixels.

double fx

The focal length of the image plane, as a multiple of pixel width.

double fy

The focal length of the image plane, as a multiple of pixel height.

double cx

The horizontal coordinate of the principal point of the image.

double cy

The vertical coordinate of the principal point of the image.

double coeffs[5]

The distortion coefficients: k1,k2,p1,p2,k3.

double p[12]

3x4 projection matrix in the (rectified) coordinate systems:
left:  [fx'  0  cx'  0;  0  fy'  cy'  0;  0  0  1  0]
right: [fx'  0  cx'  tx;  0  fy'  cy'  0;  0  0  1  0]

double r[9]

3x3 rectification transform (rotation matrix) for the left camera.

StreamIntrinsics
struct StreamIntrinsics

Camera intrinsics: size, coeffs and camera matrix.

ImuIntrinsics
struct ImuIntrinsics

IMU intrinsics: scale, drift and variances.

Public Members

double scale[3][3]

Scale matrix.

Scale X     cross axis  cross axis
cross axis  Scale Y     cross axis
cross axis  cross axis  Scale Z

double assembly[3][3]

Assembly error [3][3].

double noise[3]

Noise density variances.

double bias[3]

Random walk variances.

double x[2]

Temperature drift.

0 - Constant value
1 - Slope

MotionIntrinsics
struct MotionIntrinsics

Motion intrinsics, including accelerometer and gyroscope.

Public Members

ImuIntrinsics accel

Accelerometer intrinsics.

ImuIntrinsics gyro

Gyroscope intrinsics.

Extrinsics
struct Extrinsics

Extrinsics, representing how the different datas are connected.

Public Functions

Extrinsics Inverse() const

Invert this extrinsics.

Return

the inverted extrinsics.

Public Members

double rotation[3][3]

Rotation matrix from the left camera to the right camera.

double translation[3]

Translation vector from the left camera to the right camera.

Utils

select

bool mynteyed::util::select(const Camera &cam, DeviceInfo *info)

is_right_color_supported

bool mynteyed::util::is_right_color_supported(const StreamMode &mode)

Android API DOCS

USBMonitor

Register listener

public void register()

Logout listener

public void unregister()

Release listener

public void destroy()

MYNTCamera

Whether the camera is connected or not.

public boolean isConnected()

Whether the preview function is started or not.

public boolean isStart()

Whether the camera is opened or not.

public boolean isOpen()

Whether the version of the phone’s USB interface is USB 3.0 or not.

public boolean getIsUSB3()

Whether IMU is supported or not.

public boolean isIMUSupported()

Get device’s serial number.

public String getSerialNumber()

Get device’s name.

public String getName()

Get device’s type.

public int getCameraType()

Set callback of camera’s connection

public void setCameraListener(ICameraListener callback)

Set callback of IMU info (device with IMU)

public void setImuCallback(IIMUCallback callback)

Set callback of image info

public void setFrameCallback(IFrameCallback callback)

Connect camera

public void connect()

Open camera.

public int open()

Close camera

public void close()

Start preview (IMU / VIDEO / ALL)

public boolean start(Source source, Frame frame)

Release camera

public void destroy()

Set the depth of camera (8bit / 11bit)

public void setDepthType(short depthType)

Get the type of camera’s depth.

public short getDepthType()

Set preview resolution (480 / 720).

public void setPreviewSize(int height)

Get width of preview resolution.

public int getPreviewWidth()

Get height of preview resolution.

public int getPreviewHeight()

Set the Surface for preview

public void setPreviewDisplay(Surface surface, Frame frame)

Get UVC FPS

public double getUVCFPS(Frame frame)

Get preview FPS

public double getPreviewFPS(Frame frame)

Get internal parameters of the camera

public RectifyLogData getRectifyLogData()

Whether IR is supported or not.

public boolean isIRSupported()

Set value of IR

public int setIRCurrentValue(int value)

Set the color mode of preview.

public void setColorMode(ColorFrame colorFrame)

Get current color data of camera.

public FrameData getColorFrameData()

Get current depth data of camera.

public FrameData getDepthFrameData()

Get current IR value.

public int getIRCurrentValue()

Get min value which IR support.

public int getIRMinValue()

Get max value which IR support.

public int getIRMaxValue()

Convert a pixel index into distance information (unit: mm)

public int getDistanceValue(int index)

Whether AE is enabled or not.

public boolean getAEStatusEnabled()

Enable AE function.

public void setEnableAE()

Disable AE function.

public void setDisableAE()

Whether AWB is enabled or not.

public boolean getAWBStatusEnabled()

Enable AWB function.

public void setEnableAWB()

Disable AWB function.

public void setDisableAWB()

Enable or disable the frame FPS display.

public void setEnableFrameFPS(boolean enable, int camera_switch)

Save point cloud data within the distance.

public void savePointCloud(final FrameData colorFrameData,
                           final FrameData depthFrameData,
                           final String filePath,
                           Boolean hasColor,
                           int distance)

ImuData (Device with IMU)

Data Type

/**
 * Data type
 *
 * 1: accelerometer
 * 2: gyroscope
 *
 * */
public int flag;

Timestamp

public long timestamp;

Temperature

public double temperature;

Data

/**
 * Imu accelerometer data for 3-axis: X, Y, Z.
 * Imu gyroscope data for 3-axis: X, Y, Z.
 *
 * */
public double value[];

FrameData

Data type

/**
 *
 * FrameData.COLOR
 * FrameData.DEPTH
 *
 * */
public int flag;

Frame id

public int frameId;

Image width

public int width;

Image height

public int height;

Color frame type (Left / Left && Right)

public ColorFrame colorMode;

Frame format

/**
 * MYNTCamera.FRAME_FORMAT_YUYV
 * MYNTCamera.FRAME_FORMAT_MJPEG
 * MYNTCamera.PIXEL_FORMAT_RGBX
 *
 * */
public int type;

Depth image type

/**
 * MYNTCamera.DEPTH_DATA_11_BITS
 * MYNTCamera.DEPTH_DATA_8_BITS
 *
 * */
public int depthType;

Get bitmap

public Bitmap convert2Bitmap(byte[] bytes, int width, int height)

Get data from left camera

public byte[] getLeftBytes()

Get data from right camera

public byte[] getRightBytes()

Get distance array (only when the flag is “DEPTH”)

public int[] getDistanceInts()

Get distance array (only when the flag is “DEPTH”)

/**
 * get distance table(int)
 *
 * @param max   Max (mm); values greater than max become 0.
 *
 * */
public int[] getDistanceInts(int max)

Get distance array (only when the flag is “DEPTH”)

/**
 * get distance table(int)
 *
 * @param min   Min(mm)
 * @param max   Max (mm); values greater than max become 0.
 *
 * */
public int[] getDistanceInts(int min, int max)

Get distance array (only when the flag is “DEPTH”)

public byte[] getDistanceShorts()

Get distance array (only when the flag is “DEPTH”)

/**
 * get distance table(int)
 *
 * @param max   Max (mm); values greater than max become 0.
 *
 * */
public byte[] getDistanceShorts(int max)

Get distance array (only when the flag is “DEPTH”)

/**
 * get distance table(int)
 *
 * @param min   Min(mm)
 * @param max   Max (mm); values greater than max become 0.
 *
 * */
public byte[] getDistanceShorts(int min, int max)

Get distance (only when the flag is “DEPTH”)

public int getDistanceValue(int index)

Get distance (only when the flag is “DEPTH”)

public int getDistanceValue(int x, int y)

TECHNICAL SUPPORT

FAQ

If you have problems using MYNT EYE, please first check the following docs. http://support.myntai.com/hc/

Contact Us

If you continue to encounter problems, please contact customer service. http://support.myntai.com/hc/request/new/