Parrot ANAFI Ai 4G Robotic UAV Instruction Manual

June 3, 2024
Parrot

ANAFI Ai 4G Robotic UAV

The 4G robotic UAV

Cover illustration: a computational fluid dynamics (CFD) view of ANAFI Ai, showing the static pressure distribution on the drone (right of the image) and highlighting the air vortices around the drone at full forward speed (left of the image).


v1.6

Table of contents
A naturalist remark
Flight performances
    Key features
    A cool drone
    Aerodynamic performances
    Actuators and aerodynamics
    Sensors
    Autopilot
    Autonomous flight
    Flight missions
    Hybrid stabilization
    292° tilt range
4G Connectivity
    Key features
    4G
    Wi-Fi
    Video streaming
Camera
    Key features
    48 MP sensor
    Lens design
    Video modes
    HDR
    Photo modes
    Settings
    6x zoom
Photogrammetry
    Key features
    Designed for inspection and mapping
    Harness the power of AI and 4G
    Photogrammetry or LIDAR?

ANAFI Ai – © Parrot 2021


Partner Ecosystem
Software Development Kit
    Key features
Skycontroller 4
    Key features
    Operations
    Design
    Connectors
    Connectivity
    HDMI
    Battery
    Durability
    Performances
    Quality
Smart battery
    Key features
    Performances
    Functions
    Quality
Cybersecurity by design
    Key features
    No data shared by default
    FIPS140-2 compliant and CC EAL5+ certified Secure Element
    4G secure pairing and strong authentication
    Secure initialization and update
    Configuring user keys on the Secure Element
    Digitally signed pictures
    Transparency and Bug bounty continuous security check
Data management
    Data collected
    Final use of collected data
Pack
    Pack contents
Appendix: certification documents


A naturalist remark
In the animal kingdom, there is no known species that has optical sensors distributed around its body.
No insect, bird, fish, or mammal has developed a “visual perception system” of such sensors, despite each species needing to detect obstacles or predators in all directions in order to survive.
Natural evolution has led to the most widespread and efficient solution. For example:
– mobile heads that can be oriented on 3 axes: left/right (yaw), up/down (pitch), and also to a lesser extent with respect to the horizon (roll);
– these heads integrate a unique pair of sensors adapted to the animal’s behavior, installed on a vertical plane of the face for the tree-dwelling primates from which we descend, or directed more laterally, as in equids for example;
– the pair of sensors are generally mobile on 2 axes: left/right (yaw axis) and up/down (pitch axis).
This visual perception is thus built from a pair of sensors (the eyes), mobile on 5 free axes (the movements of the head, and of the eyes in their orbits).
In biology, we also see this in the evolution of the optic nerve. For most species, the optic nerve is the largest of its kind; it transmits large amounts of information across the body to its cerebral support. It is also a very short nerve made up of a bundle of cerebral fibers. When we apply this to a drone, the link (the “bus”) between the sensor and the processor requires an essential exchange of information, which implies optimizing the length of the bus.
Anatomically, the head of some species is often detached from the rest of the body. Flying species (insects, birds, mammals) have their head located at the front of their frame, ahead of the rest of the body – especially the wings. This placement also allows the eyes to be positioned in such a way as to have an excellent field of view in all directions, and turning the head allows most flying species to see precisely behind them.
It is from this system of visual perception and cerebral support that the cognitive functions emerge – in this case, the perceptive and conceptual functions.
With this in mind, Parrot has designed the ANAFI Ai obstacle avoidance system.
Henri Seydoux
Founder and CEO, Parrot Drones


Flight performances
Key features
– 32-minute flight time
– Best-in-class input power/output power ratio (propellers: 66 % figure of merit)
– Designed for outdoor missions as well as indoor (warehouses, tunnels)
– Rotating twin sensor for obstacle avoidance
– IPX3: withstands rain and 45 km/h winds
A cool drone
By their design, ANAFI Ai and its Skycontroller 4 are a milestone for the UAV industry: the controller no longer requires cumbersome cables, and it accommodates devices from large smartphones up to an iPad mini. It also features an HDMI output, which allows streaming videos from ANAFI Ai.
ANAFI Ai is Parrot’s new cool drone: it looks nice, it is intriguing, foldable, operational in 1 minute and resistant to rain.

Aerodynamic performances

Maximum speed: 17 m/s forward – 16 m/s backward and laterally
Wind resistance: 12.7 m/s
Flight time: 32 min
Max climbing speed: 4 m/s
Max descent speed: 3 m/s
Max practical ceiling above sea level: 5,000 m
Range: 22.5 km at 14 m/s without wind
Max angular speed: 300°/s on pitch and roll axes, 200°/s on yaw axis


Actuators and aerodynamics

CFD view of ANAFI Ai propeller blades

Designed by biomimicry, the new bio-inspired propeller blades use a leading edge similar in shape to the fins of humpback whales.
This allows better propulsive efficiency: for the same rotation speed, the thrust is increased. The effect is comparable to an enlargement of the rotor diameter.
The propellers achieve a figure of merit of 66 %. The figure of merit is a performance indicator for a propeller design and can be seen as an equivalent of efficiency. Efficiency is defined as the ratio between output power and input power; for a drone propeller, we use the figure of merit instead: the ratio between an “ideal power” – the theoretical minimum power a propeller of a given size needs to produce a given thrust – and the actual input power.
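As an illustration of the relation between thrust, rotor size and ideal power, the momentum-theory figure of merit can be sketched as follows. The thrust, input power and radius values are hypothetical, chosen only to land near a 66 % figure of merit; they are not Parrot's actual rotor data.

```python
import math

def figure_of_merit(thrust_n, input_power_w, rotor_radius_m, rho=1.225):
    """Figure of merit: ideal (momentum-theory) hover power divided by the
    actual input power. rho is the air density in kg/m^3."""
    disk_area = math.pi * rotor_radius_m ** 2
    # Momentum theory: ideal induced power for a given thrust and disk area
    induced_velocity = math.sqrt(thrust_n / (2 * rho * disk_area))
    p_ideal = thrust_n * induced_velocity
    return p_ideal / input_power_w

# Hypothetical rotor: 4 N of thrust for 48 W of input power, 9 cm blade radius
fom = figure_of_merit(4.0, 48.0, 0.09)  # ~0.67
```

A figure of merit of 1 would mean the propeller wastes no power beyond the momentum-theory minimum; real rotors fall below it because of profile drag and tip losses.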
Acoustic noise is also reduced, especially the tonal noise coming from the leading edge. As a result, ANAFI Ai is quieter [71.5 dB SPL (A) at 1 m] than the Skydio 2 [76.4 dB SPL (A) at 1 m].
· Flight time of more than 32 minutes.
· Top speed of 17 m/s (61 km/h) in forward flight and 16 m/s (58 km/h) in lateral and backward flight, thanks to the optimized aerodynamic performance of the body and the ANAFI Ai powerplant.
· Wind resistance of 12.7 m/s (45 km/h).
· 22.5 km range at 14 m/s without wind, thanks to the high efficiency of the motor/propeller pair and the high autonomy of the battery.
The following aerodynamic simulations were performed during the development phase:
· Parrot designs the conversion chain using digital multiphysics simulation tools. These tools and the Matlab numerical models developed in-house enable us to determine the drone’s performance for a given size, mass and power consumption, right from the specification phase.
· The numerical models allow us to dimension the UAV according to the customer’s needs.
· Modeling of the propulsion unit (CFD).
· Modeling of the drone body (CFD).

ANAFI Ai ­ © Parrot 2021

7

CFD views of ANAFI Ai arms
Tests in the development phase:
· Tests carried out on the propulsion unit bench (motor, propellers)
· Wind tunnel tests to determine aerodynamic performance (body, propellers)
Production tests:
o All the sub-assemblies making up the UAV are checked on the production bench.
o Every drone undergoes a flight test before it is packed: take-off, hovering, landing.
Sensors
To ensure its safe flight, ANAFI Ai is equipped with:
– 2 IMUs (an ICM-40609-D and an ICM-42605)
– LIS2MDL magnetometer
– UBX-M8030 GPS
– TI OPT3101 time-of-flight sensor (ToF)
– LPS22HB barometer
– vertical camera
The ToF is responsible for providing a ground distance measurement and consists of an infrared transmitter-receiver measuring the drone-object distance according to the round-trip time of a light emission between the object and the onboard sensor.
The ToF sensor has the following advantages compared to ultrasound:
– Smaller minimum measurable distance: 10 cm, versus 40 cm for ultrasound
– Higher maximum measurable distance: 9 m for a target with an IR reflectivity of 80 %, versus 5 m for ultrasound on concrete
– No disturbance from other nearby drones
Several calibrations are applied to the ToF sensor to ensure the reliability of its measurements:
– Crosstalk correction on each individual drone (between diode and photodiode)
– Distance offset on each individual drone (based on a known distance)
– Temperature sensitivity (compensation by a distance offset dependent on the temperature)
– Ambient IR light sensitivity (compensation by an offset dependent on the light intensity)


ANAFI Ai is also equipped with a multidirectional depth sensing system (stereo vision) described in the “Autonomous Flight” section.
Sensors’ characteristics

Flight IMU: ICM-40609-D
– 3-axis gyroscope
– Range: ±2000 °/s
– Resolution: 16.4 LSB/°/s
– Bias/accuracy: ±0.05 °/s (after thermal and dynamic calibration)
– 3-axis accelerometer
– Range: ±16 g
– Resolution: 2.048 LSB/mg
– Bias/accuracy: ±0.5 mg (X-Y), ±1 mg (Z) (after thermal and dynamic calibration)
– Temperature regulation: controlled heating system in relation to the ambient temperature, stabilized within ±0.15 °C
– Measurement frequency: 2 kHz
Magnetometer: LIS2MDL
– Range: ±49.152 G
– Resolution: 1.5 mG
– Bias/accuracy: ±15 mG (after compensation, at maximum motor speed)
– Measurement frequency: 100 Hz
Barometer: LPS22HB
– Range: 260 to 1260 hPa
– Resolution: 0.0002 hPa
– Bias/accuracy: ±0.1 hPa
– Temperature regulation: controlled heating system in relation to the ambient temperature, stabilized within ±0.2 °C
– Measurement frequency: 75 Hz
– Measurement noise: 20 cm RMS
GNSS: UBX-M8030
– Ceramic patch antenna of 25 × 25 × 4 mm, allowing a +2 dB gain improvement compared to ANAFI
– Sensitivity: cold start -148 dBm / tracking & navigation: -167 dBm
– Time-To-First-Fix: 40 seconds
– Bias/accuracy:
  o Position: standard deviation 1.4 m
  o Speed: standard deviation 0.5 m/s
Vertical camera
– Sensor format: 1/6 inch
– Resolution: 640 × 480
– Global shutter sensor
– Black & white
– FOV: horizontal viewing angle 53.7° / vertical viewing angle 41.5°
– Focal length: 2.8 mm
– Optical flow ground speed measurements at 60 Hz
– Point of interest calculation for accurate hovering at 15 Hz and accurate landing at 5 Hz


ToF: TI OPT3101
– Range: 0-15 m
– Resolution: 0.3 mm
– Bias: ±2 cm (after calibration)
– Measurement frequency: 64 Hz
Vertical camera IMU: ICM-42605
– 3-axis gyroscope
– Range: ±2000 °/s
– Resolution: 16.4 LSB/°/s
– Bias/accuracy: ±0.1 °/s (after dynamic calibration)
– 3-axis accelerometer
– Range: ±16 g
– Resolution: 2.048 LSB/mg
– Bias/accuracy: ±2.0 mg (X-Y), ±5.0 mg (Z) (after dynamic calibration)
– Measurement frequency: 1 kHz
– Hardware synchronization with the vertical camera, accuracy: 1 µs
Autopilot
Key features
The ANAFI Ai flight controller offers easy and intuitive piloting: no training is required to fly the drone. It allows the automation of many flight modes (Flight Plan, Cameraman, Hand take-off, Smart RTH).
Sensor fusion algorithms combine the data from all sensors to estimate the attitude, altitude, position and velocity of ANAFI Ai.
State estimation is essential for the proper functioning of drones. Quadrotors are by nature unstable when the flight controller is in open loop; to pilot them easily, let alone to operate them autonomously, it is necessary to stabilize them by closed loop control algorithms. These algorithms compute and send to the motors the commands needed for ANAFI Ai to reach the desired trajectories.
Estimation
ANAFI Ai estimates the states of the UAV from sensor measurements via several data fusion algorithms, such as extended Kalman filters. The main estimated states are the following:
– Ground speed on x, y and z axes in drone frame, in [m/s]
– Attitude Euler angles (roll, pitch, yaw), in [rad]
– Accelerometer bias on x, y and z axes in drone frame, in [m/s²]
– Gyroscope bias on x, y and z axes in drone frame, in [rad/s]
– Magnetometer bias on x, y and z axes in drone frame, in [mG]
– Pressure sensor bias, in [m]
– Position on x, y axes in NED frame, in [m]
– Altitude above take-off, in [m]
– Altitude above ground level, in [m]
– Wind on x, y axes in NED frame, in [m/s]
– Propulsion vector mechanical misalignment (roll, pitch)
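The predict/update cycle underlying such filters can be illustrated with a minimal scalar Kalman filter. This is a toy example fusing noisy altitude readings, not ANAFI Ai's actual multi-state extended filter; all numbers are hypothetical except the 20 cm RMS barometer noise reused as the measurement variance.

```python
class Kalman1D:
    """Minimal scalar Kalman filter illustrating the predict/update cycle."""

    def __init__(self, x0, p0, q, r):
        self.x, self.p = x0, p0  # state estimate and its variance
        self.q, self.r = q, r    # process and measurement noise variances

    def predict(self, u=0.0):
        self.x += u              # e.g. integrate vertical speed over one step
        self.p += self.q         # uncertainty grows between measurements

    def update(self, z):
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # pull estimate toward the measurement
        self.p *= 1.0 - k               # uncertainty shrinks after the update
        return self.x

# Fuse noisy altitude readings around a true altitude of 10 m
# (r ~ (0.2 m)^2, matching the barometer's 20 cm RMS measurement noise)
kf = Kalman1D(x0=0.0, p0=10.0, q=0.01, r=0.04)
for z in [9.8, 10.3, 9.9, 10.1, 10.05]:
    kf.predict()
    kf.update(z)
```

The onboard filters apply the same cycle to the full state vector above, with nonlinear measurement models linearized at each step.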


Control
Control algorithms determine the commands needed by the drone motors to reach the given guidance references from the current estimated states. The ANAFI Ai control architecture includes:
– a gravity compensation algorithm, robust to mass variations
– an inner pitch/roll attitude loop
– an outer position loop
– an altitude loop
– a yaw attitude loop
All control loops use trajectory feedforward generators to achieve the desired trajectory dynamics, as well as disturbance rejection mechanisms.
A mixing algorithm combines the output commands from the different loops, ensuring that the given motor speed saturation bounds are respected. When the motors saturate, the following priority order is applied to the commands:
– Gravity compensation
– Roll
– Pitch
– Yaw
– Altitude variations
This means that in the presence of strong disturbances which make it impossible for the drone to maintain stability on all axes, control errors are allowed on the different axes in the order described above.
In parallel, control adjusts the drone speed to get out of the saturated motor condition while following the guidance command as closely as possible.
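The priority mechanism can be sketched as follows: per-motor command contributions are summed in priority order, and a lower-priority contribution is scaled down as soon as it would push any motor past its saturation bounds. This is a toy illustration of the idea, not Parrot's mixer; the motor layout and all numbers are hypothetical.

```python
def mix_with_priority(contributions, lower, upper):
    """Sum per-motor command contributions in priority order (highest first),
    scaling a contribution down when it would drive any motor out of its
    [lower, upper] saturation bounds."""
    total = [0.0] * len(lower)
    for contrib in contributions:
        # Largest factor in [0, 1] keeping every motor within its bounds
        scale = 1.0
        for t, c, lo, hi in zip(total, contrib, lower, upper):
            if c > 0 and t + c > hi:
                scale = min(scale, (hi - t) / c)
            elif c < 0 and t + c < lo:
                scale = min(scale, (lo - t) / c)
        scale = max(scale, 0.0)
        total = [t + scale * c for t, c in zip(total, contrib)]
    return total

# Hypothetical quadrotor, motor commands bounded to [0, 100]:
contributions = [
    [50, 50, 50, 50],    # gravity compensation (highest priority)
    [30, -30, -30, 30],  # roll
    [40, 40, -40, -40],  # pitch
]
cmd = mix_with_priority(contributions, lower=[0] * 4, upper=[100] * 4)
# the pitch contribution is halved so that no motor leaves [0, 100]
```

In this example gravity compensation and roll pass through untouched, while the pitch command is the one sacrificed to stay within motor limits, matching the priority order above.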
Indoor flight
In the absence of a GPS signal, ANAFI Ai relies mostly on the vertical camera measurements for velocity and position estimation.
Vertical camera measurements are produced by two main algorithms.
Optical flow, for velocity estimation:
· Computes a translation in pixels between two successive images, at a normalized ground distance of 1 m
· This translation is multiplied by the ground distance to obtain a translation in meters
· To obtain a velocity measurement, the translation in meters is divided by the time elapsed between the two images
· For the algorithm to perform correctly, the orientation change between images must be corrected when computing the pixel translation, which is done using the vertical camera IMU measurements
· The algorithm runs at 60 Hz
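The steps above amount to a simple conversion, sketched below. The focal length, pixel shift and ground distance are hypothetical values, and the real algorithm additionally removes the inter-frame rotation using the vertical camera IMU.

```python
def optical_flow_velocity(shift_px, focal_px, ground_distance_m, dt_s):
    """Turn a pixel shift between two consecutive frames into a ground
    velocity: pixels -> meters via the ground distance, then meters -> m/s
    via the inter-frame time."""
    shift_m = (shift_px / focal_px) * ground_distance_m
    return shift_m / dt_s

# Hypothetical numbers: a 12 px shift seen with a 600 px focal length,
# 2 m above the ground, at 60 Hz (dt = 1/60 s)
v = optical_flow_velocity(12, 600, 2.0, 1 / 60)  # 2.4 m/s
```

Note how the ground distance (supplied by the ToF sensor) scales the result: the same pixel shift at twice the altitude corresponds to twice the velocity.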
Keypoint detection and matching, for position estimation:
· Starts from a reference image, taken when the drone starts hovering in appropriate light conditions


· Then, at each new image, it looks for points which can also be found in the reference image (matching)
· From these matches, it computes the camera displacement between the two images, for which attitude stabilization is also necessary
· The drone displacement between two images is computed from the camera displacement and the ground distance
· The algorithm runs at 15 Hz
Velocity measurements from optical flow are used by data fusion algorithms, along with flight IMU and ToF measurements, to produce a reliable velocity and ground distance estimation during hovering and flight.
During hovering, position measurements from keypoint detection and matching are also added to the data fusion, thus helping ANAFI Ai maintain a precise fixed point: the drone remains stable within a sphere of 5 cm radius at 1 m altitude – against ±10 cm for the Mavic PRO (depending on the spec). Measurements from keypoint detection and matching are also used when ANAFI Ai rotates around its yaw axis, or when it performs vertical movements (no longitudinal or lateral flight), thus ensuring precise rotations and ascents/descents.
The vertical camera algorithms can still run in low light conditions because ANAFI Ai is equipped with a pair of LED lights, located next to the vertical camera. These allow the drone to maintain stability, especially when flying indoors or in a GPS-denied environment, below 5 m over the ground. The LED lights’ power adapts automatically, depending on the algorithms’ needs.

Autonomous flight
Key features
· Rotating, wide field of view perception system
· Surrounding environment depth extraction from stereo matching and depth from motion
· Occupancy grid representation of the environment
· Autonomous obstacle detection and avoidance at speeds up to 8 m/s (29 km/h – 18 mph)
This chapter details the sensors, hardware and algorithms used by ANAFI Ai to provide autonomous flight capabilities.
It is organized as follows:
· in-depth description of the perception system of ANAFI Ai
· perception algorithms used to reconstruct the 3D environment surrounding the aircraft
· replanning and obstacle avoidance

Perception system strategy
3D environment perception is a key capability to achieve autonomous flight, especially in obstructed environments. It is a condition for guaranteeing obstacle detection and avoidance, which reduces the supervision load of the drone operator, increases the mission success rate and ensures the safety of the aircraft.
An efficient perception solution is required to unlock the full potential of a flying camera, able to translate and rotate freely in all directions without constraint. In particular, a perception system must allow capturing information on the surrounding environment, in directions that are consistent with the translational flight motion – whatever the orientation of the camera.


ANAFI Ai relies on a unique technical solution based on two mechanical gimbals to decouple the orientation of the main camera and the perception system:
· the main camera is mounted on a pitch-roll-yaw 3-axis gimbal making its 3D orientation independent from that of the drone
· the perception system is mounted on a single axis pitch gimbal – coupled to the drone’s yaw movement, it can be oriented in any direction
The pitch axes of the two gimbals are colinear and merged to achieve an ultra-compact design.
ANAFI Ai’s dual gimbal for perception and imaging
With this solution, it is possible to point the main camera and the perception system in two different directions. This design avoids the use of expensive cameras on the sides, top, bottom and back of the drone, while still providing a large accessible field of view to the perception system. This section is organized as follows:
· details on the sensors used for the perception system
· specifications of both the gimbal of the main camera and that of the perception system
· strategies for orienting the perception system to exploit the potential of the dual gimbal design


Sensors
The perception system relies on a pair of identical cameras, sharing the same pitch axis.

Single axis pitch gimbal of ANAFI Ai’s perception system
The specifications of the sensors are the following:
· Model: Onsemi AR0144CSSM28SUD20
· Color: monochrome
· Resolution: 1280 x 800 pixels
· Frame rate: 30 fps
· Global shutter
· Full horizontal field of view: 118.5° (110° used for perception)
· Full vertical field of view: 82.5° (62° used for perception)
· Focal length: 1.55 mm (0.061″)
· Aperture: f/2.7
The specifications of the stereo pair are the following:
· Shared pitch axis
· Baseline/distance: 70 mm (2.76″)
· Synchronous acquisition at 30 fps


Dual Gimbal
The gimbal of the main camera is a pitch-roll-yaw 3-axis gimbal with the following specifications:
· pitch end stops: -116°/+176°
· roll end stops: ±36°
· yaw end stops: ±48°
The gimbal of the perception system is a single axis pitch gimbal with the following specifications:
· pitch end stops: -107°/+204°
· time of travel from one end stop to the other: 300 ms
The perception system benefits from 311° of travel (296° unmasked by the drone body), which allows backward perception.

Instantaneous vertical field of view and end stops of ANAFI Ai’s perception system
The system has been designed so that:
· the propeller blades cannot enter the field of view of the main camera
· the main camera does not mask the field of view of the perception system
· both the main camera and the perception system can fully tilt backward to protect the lenses, during storage or in case of in-flight emergency


Horizontal field of view of ANAFI Ai’s perception system
When tilted backward, the perception system rotates high up, offering a clear view.

ANAFI Ai’s perception in fully tilted position for backward flight


Quality control: the gimbals have been bench tested for 125 hours straight with no defect or wear noted. Each gimbal is tested at every stage of production, including:
· control of the motors and motor assembly
· calibration of the stereoscopic camera poses
· control of the Hall sensors’ amplitude, bias and noise
The quality control data can be provided upon customer request.
Perception system orientation strategies
For autonomous flight modes, a “crab flight mode” has been designed to exploit the entire field of view accessible to the perception system through the dual gimbal design:
· the 3-axis gimbal of the main camera is always pointed in the user-defined direction
· by rotating the drone around its yaw axis and by pitching the single axis gimbal of the perception system, the direction of translational flight motion is kept within the field of view of the perception system

ANAFI Ai performing crab flight, orienting its perception system in a useful direction while pointing its main camera in the specified recording direction
A simpler strategy is applied for the take-off and landing phases:
· At take-off, the perception system is oriented upward
· For the landing, the perception system is oriented downward


During take-off and landing phases, ANAFI Ai’s perception system is oriented vertically
Environment reconstruction
The reconstruction of the surrounding 3D environment for autonomous flight is performed in two steps:
· extraction of depth information from the perception, as depth maps · depth maps data fusion into a 3D occupancy grid
Two methods are used to generate depth maps from the perception sensors:
· depth from stereo matching · depth from motion
Depth from stereo matching
The main method used for depth information extraction relies on the parallax between the two stereo cameras of the perception system. By photographing the environment in the same direction but from two different positions, objects in the field of view of the perception system appear at different positions in the pictures produced by the two cameras. The closer the object, the larger this difference in position.
The strategy thus consists in identifying points in the pictures produced by the left and right stereo cameras corresponding to the same feature in the field of view of the perception system, and measuring the position difference of these points in the two pictures. This difference is called the disparity and is measured in pixels.


Illustration of the stereo vision principle – the red 3D spot appears in different positions in the left and right images
The disparity can then be linked to the depth of each of these points using the following relation:
depth = focal * baseline / disparity
where the depth and the baseline are expressed in the same unit, and the focal length and disparity are expressed in pixel count.
The result of the computation takes the form of a 176 x 90 pixels depth map, for which the value of each pixel corresponds to a depth, in meters. The depth map is updated at 30 Hz.
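The relation above can be applied directly, as in the sketch below. The focal length in pixels is a hypothetical value chosen for illustration; only the 70 mm baseline comes from the specifications.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """depth = focal * baseline / disparity; focal length and disparity in
    pixels, baseline and depth in meters."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point at (or beyond) infinity
    return focal_px * baseline_m / disparity_px

# Hypothetical 480 px focal length with the 70 mm stereo baseline
d_near = stereo_depth(10, 480, 0.070)  # 3.36 m
d_far = stereo_depth(0, 480, 0.070)    # out of reach
```

The hyperbolic shape of this relation is what makes disparity measurements precise up close and coarse at long range, as discussed next.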

Example of an image captured by the right camera of ANAFI Ai’s perception system (left) and the corresponding depth map obtained by stereo matching (right). The color map goes from red (close) to purple (far) – white meaning “out of reach”
An immediate consequence is that the depth measured through this method is discretized, as the disparity can only take discrete values (counts of pixels). A 3D point sufficiently far from the perception system that it would generate a theoretical disparity smaller than one pixel will thus be considered at infinity, as the corresponding actual, discrete, disparity will be 0. The precision of the stereo matching method hence decreases with distance, though methods exist to reduce this phenomenon by achieving subpixel discretization.


Discretized depth measured by “stereo matching” vs “true depth”
In addition, the disparity diverges as the depth gets closer to zero. Since the number of pixels in the images is limited, so is the value of the disparity. As a consequence, there is a minimum depth under which the perception system is blind. The value of this minimum depth is 36 cm (14.2 inches) for ANAFI Ai.
About calibration: each pair of stereo cameras is factory-calibrated to precisely measure the slight misalignments that may exist between the two cameras and to compensate for it in the onboard computation of the depth.
The user can also recalibrate the stereo camera pair with the test pattern provided in the packaging. To some extent, the drone is capable of detecting potential calibration errors that may occur over its lifetime. In that case, the drone software tries to adjust and compensate for them; if it fails to do so, a notification appears to request a recalibration.
Depth from motion
The motion of the aircraft can also be exploited to collect images of the environment from different points of view, and thus reconstruct depth information. This method is called depth from motion, or monocular perception, since a single moving camera suffices to gather depth information.
The principle is similar to stereo vision but, rather than comparing images of the environment acquired by distinct observers at the same time, the perception compares images of the environment from the same observer at different times. If the drone is moving, the images from this unique observer are acquired from different points of view. Knowing the pose at which each frame was taken, points corresponding to the same feature in the different images can be triangulated and put back in 3D.
This results in a 3D point cloud, containing up to 500 points for ANAFI Ai, generated at 10 Hz.
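The triangulation step can be illustrated with the classic midpoint method: given two camera poses and the viewing ray of the same feature from each, the point closest to both rays is computed. This is a generic sketch of the idea behind depth from motion, not Parrot's implementation, and all numbers are hypothetical.

```python
import math

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation: the point closest to two viewing rays
    (origin o, direction d), one per camera pose."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = [b - a for a, b in zip(o1, o2)]  # o2 - o1
    # Solve for the ray parameters t1, t2 minimizing |(o1+t1*d1)-(o2+t2*d2)|
    a11, a12 = dot(d1, d1), -dot(d1, d2)
    a21, a22 = dot(d1, d2), -dot(d2, d2)
    b1, b2 = dot(w, d1), dot(w, d2)
    det = a11 * a22 - a12 * a21
    t1 = (b1 * a22 - a12 * b2) / det
    t2 = (a11 * b2 - a21 * b1) / det
    p1 = [o + t1 * d for o, d in zip(o1, d1)]  # closest point on ray 1
    p2 = [o + t2 * d for o, d in zip(o2, d2)]  # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# The drone moved 1 m along x between the two frames; both viewing rays
# point at the same feature, located at (0, 0, 5)
n = math.sqrt(26)
p = triangulate_midpoint([0, 0, 0], [0, 0, 1], [1, 0, 0], [-1 / n, 0, 5 / n])
```

With perfect rays the two closest points coincide at the feature; with noisy real rays, the midpoint between them is returned.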


Example of a point cloud produced through depth from motion – the color map goes from red (close) to purple (far)
The depth from motion algorithm in ANAFI Ai usually generates less information (a sparse point cloud) than the stereo matching algorithm, and requires the drone to be moving to gather information. Furthermore, this algorithm fails to extract information in the exact direction of motion (at least for straight translations), since in this direction objects appear almost motionless in the images (focus of expansion).
However, it has a better detection range (theoretically infinite) than the stereo matching method.
Occupancy grid
The depth information from the stereo and monocular perception algorithms is integrated into an occupancy grid. This grid discretizes the 3D surrounding environment into 3D cubes, called voxels. Each voxel is assigned a probability of being occupied by an obstacle or, on the contrary, of being free of obstacles.
A raycasting algorithm is used to integrate the depth information into the occupancy grid. For each pixel of the depth map generated by depth from stereo matching, converted into a 3D point, and for each point of the point cloud from depth from motion:
· A ray is cast in the occupancy grid, from the position of the perception system to the position of the 3D point
· The probability of occupation of the voxel containing the 3D point is increased
· The probability of occupation of all the voxels crossed by the ray – except the one containing the 3D point – is decreased
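The update rule above can be sketched with a log-odds grid and a simple ray sampler. The cell size, sampling step and hit/miss increments are arbitrary illustrative choices, not ANAFI Ai's parameters; real implementations use exact voxel traversal rather than sampling.

```python
import math
from collections import defaultdict

def integrate_ray(grid, origin, endpoint, hit=0.7, miss=-0.4):
    """Update a log-odds occupancy grid along one ray: cells crossed by the
    ray become more likely free, the cell containing the ray's endpoint more
    likely occupied. Cells are unit cubes indexed by their integer corner."""
    end_cell = tuple(int(c) for c in endpoint)
    n_steps = max(int(math.dist(origin, endpoint) * 2), 1)  # ~2 samples/unit
    visited = set()
    for i in range(n_steps + 1):
        t = i / n_steps
        cell = tuple(int(o + (e - o) * t) for o, e in zip(origin, endpoint))
        if cell in visited:
            continue
        visited.add(cell)
        grid[cell] += hit if cell == end_cell else miss

grid = defaultdict(float)  # log-odds occupancy, 0.0 = unknown
# One depth measurement: sensor at (1, 1, 1), obstacle seen at (10, 5, 3)
integrate_ray(grid, (1.0, 1.0, 1.0), (10.0, 5.0, 3.0))
```

Accumulating increments in log-odds rather than raw probabilities keeps the update a simple addition while remaining equivalent to a Bayesian occupancy update.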


Raycasting principle – on the right, the depth map produced by stereo matching; on the left, a 2D slice of the occupancy grid. For each pixel of the depth map, a ray is cast from the position of the perception system to the 3D point given by the direction corresponding to this pixel in the perception system’s field of view and its associated depth. Each voxel crossed by the ray has its probability of occupation decreased, while it is increased in the voxel in which the ray ends.
The grid thus acts both as a temporal filter of the depth information, absorbing any potential noise in the depth measurements, and as a memory of the previous acquisitions, making it possible to navigate in complex environments, even without a continuous 360° field of view of the perception system.

Example of an occupancy grid. The voxels with a high confidence of occupation are superimposed on the view of the right stereo camera, with a color going from red (close) to purple (far).
The occupancy grid constitutes the base for the motion planning algorithms used by ANAFI Ai for autonomous flight and obstacle avoidance.
Obstacle avoidance
With the knowledge of the 3D environment surrounding the aircraft stored in the occupancy grid, it is possible to provide obstacle avoidance capabilities to ANAFI Ai. This offers considerable additional safety for autonomous missions, but it is also useful for manual flight, especially if the line of sight between the pilot and the aircraft is degraded.


Every 30 ms, ANAFI Ai predicts what the nominal trajectory to follow will be over a short time horizon in the future. This prediction is deduced from the references sent by the user, whether they are piloting commands from the hand controller, waypoints to join for a flight plan, or an input trajectory. Then, using a simulated internal drone model, a replanning algorithm computes the smallest possible corrections to this predicted nominal trajectory that make it both collision-free and feasible by the drone.
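The idea of "smallest possible correction" can be sketched as follows, assuming a trajectory given as waypoints, an occupancy test derived from the grid, and a search over candidate offsets ordered by magnitude (here only vertical offsets, a deliberate simplification of the real replanner):

```python
def replan(nominal, occupied, offsets=(0.0, 0.5, -0.5, 1.0, -1.0, 1.5, 2.0)):
    """Return the nominal trajectory shifted by the smallest offset
    (here: vertically) that clears all obstacles.

    nominal  -- list of (x, y, z) waypoints predicted over the horizon
    occupied -- predicate telling whether a 3D position is inside an obstacle
    offsets  -- candidate corrections, ordered from smallest to largest
    """
    for dz in offsets:
        candidate = [(x, y, z + dz) for (x, y, z) in nominal]
        if not any(occupied(p) for p in candidate):
            return candidate
    return None  # no feasible correction within the searched set

# Hypothetical obstacle: a block occupying z < 1.2 for x between 2 and 4
occupied = lambda p: 2.0 <= p[0] <= 4.0 and p[2] < 1.2
nominal = [(float(x), 0.0, 1.0) for x in range(6)]
corrected = replan(nominal, occupied)
```

Because offsets are tried from smallest to largest, the first feasible candidate is also the least intrusive one, mirroring the "smallest possible corrections" behavior described above.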

Example of corrected trajectory computed by the obstacle avoidance algorithm in response to a reference trajectory hitting a tree
ANAFI Ai’s obstacle avoidance has been designed to handle speeds up to:
· horizontal: 8 m/s (29 km/h – 18 mph)
· ascending: 4 m/s (14 km/h – 8 mph)
· descending: 3 m/s (11 km/h – 7 mph)
Avoidance performance is limited in rain, strong wind, low light, or disturbed satellite navigation environments. The lenses of the perception system should also be clean before flying.
Flight missions

Key features

Air SDK (see SDK section) allows developers to access every drone sensor, camera, connectivity interface and autonomous feature. They can therefore customize the drone behavior to create Flight missions. Every Flight mission contains a set of basic behaviors or modes:
– Ground: behaviors while motors are stopped, such as sensor calibrations
– Take-off: various take-off strategies
– Hovering: holding a fixed point
– Flying: manual and autonomous flight functions
– Landing: various landing strategies
– Critical: behaviors when a critical condition is detected


The missions developed internally by Parrot and available in FreeFlight 7 are described in the FreeFlight 7 section. The main one is the Default mission. Custom Flight missions can implement new behaviors or reuse those of the Default mission.
Default Mission

Take-off

We have developed two take-off behaviors: a normal take-off mode and a hand take-off mode.
Normal take-off
The drone stabilizes at 1 m above the ground while holding a fixed point, thanks to data fusion from different sensors, even in strong wind conditions (45 km/h). The user can give the drone piloting commands on the pitch, roll and yaw axes during normal take-off.
Hand take-off
The drone starts the motors at minimum rotation speed before being launched by the user. Once freefall is detected, the control algorithms are activated to stabilize the drone at the launch altitude.
Landing
Normal landing
The drone reduces its vertical speed progressively until it reaches the ground, then stops the motors. The user can give the drone piloting commands on the pitch, roll and yaw axes during normal landing. However, an upward piloting command on the vertical axis aborts the procedure, making the drone hold a fixed point.
Critical landing
Whatever the altitude of the drone, when the amount of energy remaining in the battery gets close to the energy needed to land (calculated according to the altitude and wind strength), the critical landing procedure is triggered. The user can give the drone piloting commands on the pitch, roll and yaw axes, but cannot abort the procedure.
Hand landing
When the drone is less than 3.5 m from the ground and hovering, the perception system automatically recognizes the presence of a hand. When a hand is detected, activating the Take-Off/Landing button triggers the hand landing process: ANAFI Ai moves horizontally until directly over the hand, then descends onto it. Algorithms based on sensor data have been specially tuned to stop the motors as soon as possible after the drone makes contact with the hand.
The hand perception module is based on a convolutional neural network, run on the dedicated hardware of the drone's processor and trained with a database of more than 3,000 hands seen from a drone. In addition to the neural network, a verification is performed with the depth map from the stereo camera, to validate that the detected object has the characteristics of a hand.


Controlled flight

Piloting commands

Using the Skycontroller 4 or the virtual joysticks of FreeFlight 7, the user can directly send the following inputs to the drone in manual flying mode:
· pitch angle (max 30°)
· roll angle (max 30°)
· vertical velocity (max 4 m/s ascending, 3 m/s descending)
· yaw rate (rotation velocity, max 200°/s)
The maximum reachable horizontal speeds are:
· forward: 17 m/s (61 km/h – 38 mph)
· backward: 16 m/s (57 km/h – 36 mph)
· lateral: 16 m/s (57 km/h – 36 mph)
In the absence of pitch or roll inputs, the drone automatically cancels its horizontal motion.
It is possible to adjust the configuration of the joysticks through the settings of FreeFlight 7.
In this mode, the yaw of the main camera is locked in alignment with the yaw of the drone, but its pitch and zoom factor can be freely controlled by the user.
Obstacle avoidance

When obstacle avoidance is activated in this mode, the piloting inputs are limited to:
· pitch angle (max 15°)
· roll angle (max 15°)
· vertical velocity (max 4 m/s ascending, 3 m/s descending)
· yaw rate (rotation velocity, max 200°/s)
This allows speeds up to 8 m/s (29 km/h – 18 mph) in horizontal flight with obstacle avoidance activated.
Note that even without obstacle avoidance, a safety feature prevents the drone from getting closer than 35 cm (14 in) to the ground. This distance is increased to about 1 meter (3 ft) with obstacle avoidance.
Geocage
A geocaging feature is available in this flight mode. When activated, it restricts the drone motion to a virtual 3D cylinder centered on the take-off position. The radius and height of the cylinder can be specified through the FreeFlight 7 settings menu.
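A cylindrical geocage boils down to a simple geometric clamp. The sketch below is an illustrative model (the function name and the way commands are limited are assumptions, not FreeFlight 7's actual behavior):

```python
import math

def clamp_to_geocage(target, center, radius, height):
    """Clamp a commanded position into the virtual cylinder centered
    on the take-off point.

    target -- commanded (x, y, z) position, z being the altitude above take-off
    center -- (x, y) of the take-off position
    """
    x, y, z = target
    cx, cy = center
    dx, dy = x - cx, y - cy
    d = math.hypot(dx, dy)
    if d > radius:  # pull the horizontal position back onto the cylinder wall
        scale = radius / d
        x, y = cx + dx * scale, cy + dy * scale
    z = min(max(z, 0.0), height)  # keep the altitude inside the cylinder
    return (x, y, z)

# A command 50 m out and 60 m up, against a 25 m radius, 50 m high cage:
result = clamp_to_geocage((30.0, 40.0, 60.0), (0.0, 0.0), 25.0, 50.0)
```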
Cameraman
Target proposals are displayed on the screen. The user selects the target in one of two ways:
· the user selects a target among those proposed
· the user draws a rectangle to define a target


Automatic target proposal / target selection
Once the target is selected, the front camera automatically adjusts its attitude to keep it framed, by means of visual tracking. The user can give the drone pitch, roll and vertical piloting commands to make it move around the target (the drone yaw angle is locked in alignment with the front camera yaw).
The visual tracking consists in merging:
– a motion model of the target's position based on the drone's position
– a visual tracking algorithm (optical flow, SVM-based online learning, scale change estimator)
– a visual detection neural network
The SVM algorithm initiates the tracking with a single image and continues to learn the target over time. The algorithm is able to track changes in the silhouette of the target; for example, it tracks the changes in direction of a moving vehicle (side view followed by rear view).


The convolutional neural network identifies objects in the scene and recognizes them regardless of their silhouette. This network is optimized on a base of Parrot images from our drones for a high level of reliability.
The network is able to detect different classes of objects, such as people, cars, bikes, motorcycles, boats, and animals (cows, dogs and cats). Objects of these classes are proposed to the user for target selection.
POI
From the map, this flight mode allows the user to give the drone a point of interest by simply designating it on the screen. The GPS coordinates of the selected point are transmitted to the drone, and the front camera automatically adjusts its attitude to keep it framed. The user can give the drone pitch, roll and vertical piloting commands to make it move around the POI (the drone yaw angle is locked in alignment with the front camera yaw).
Autonomous flight

Flight Plan

The Flight Plan mission allows the user to have the aircraft autonomously join a sequence of viewpoints at specified speeds, while respecting specified camera orientation behaviors.
The mission can be prepared in off-line mode on the control screen, by setting the waypoints along with their altitude (above take-off), the associated camera orientation, and the speeds at which to join them. Parrot has simplified the ergonomics of mission preparation, an operation that is generally complex for the user. The flight plan can be edited, modified and replayed as many times as necessary.
Flight planning is possible without a radio connection to the remote control, but it requires the drone to have access to a satellite navigation system.

Screen capture of FreeFlight 7 Flight Plan editor
Obstacle avoidance algorithms can be activated in this mode. Available view modes include:
· Constant camera angle
· Progressive camera angle
· Point of interest
The maximum achievable speeds are:
· horizontal – without obstacle avoidance: 12 m/s (43 km/h – 27 mph)
· horizontal – with obstacle avoidance: 8 m/s (29 km/h – 18 mph)
· ascending: 4 m/s (14 km/h – 8 mph)
· descending: 3 m/s (11 km/h – 7 mph)
In case of failure to join a waypoint (an inaccessible waypoint, for instance), the drone automatically joins the next one after a short period of time.
The flight plan can be paused and resumed directly toward the waypoint the drone was joining.
Touch & fly
From the map, this flight mode allows the user to give the drone a destination by simply designating it on the screen.
The GPS coordinates of the selected point are transmitted to the drone.
Return Home (RTH)

Accurate
The vertical camera takes a reference image at the end of take-off. When the drone lands or begins its descent towards the take-off area in RTH, the algorithm takes new images to determine the displacement relative to the reference image. This displacement is used as a guidance reference for the drone.
Smart
The drone calculates the amount of energy needed to return to the home coordinates, depending on the distance and wind strength, and compares it to the energy remaining in the battery. When both values get close enough, a critical RTH is triggered. This procedure can be cancelled by the user.
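The Smart RTH decision can be sketched as an energy comparison. All model parameters below (cruise speed, power draw, wind penalty, safety margin) are illustrative assumptions, not Parrot's actual figures; only the structure of the decision (energy needed vs. energy remaining) comes from the text above:

```python
def should_trigger_rth(distance_m, wind_ms, battery_wh, margin=1.15):
    """Decide whether a smart RTH must start (rough sketch).

    distance_m -- distance to the home coordinates
    wind_ms    -- headwind speed, which increases the return cost
    battery_wh -- energy remaining in the battery
    margin     -- safety factor applied to the estimated need
    """
    cruise_speed = 10.0  # m/s, assumed return speed
    flight_power = 80.0  # W, assumed consumption in forward flight
    wind_penalty = 1.0 + max(wind_ms, 0.0) / 15.0
    time_s = distance_m / cruise_speed
    energy_needed_wh = flight_power * wind_penalty * time_s / 3600.0
    return battery_wh <= energy_needed_wh * margin
```

With these numbers, a drone 3.6 km from home needs about 8 Wh to return, so 1 Wh remaining triggers the RTH; at 100 m from home the same 1 Wh does not.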
Obstacle avoidance is available in this mode (see Autonomous Flight – Flight Plan section for flight limits).


Vehicle Mission

FreeFlight 7 Vehicle Mission selection
The objective of this mission, available in FreeFlight 7, is to fly autonomously in a vehicle-relative frame (car, truck, boat, etc.) with high precision. It also makes it possible to take off from and land on a moving vehicle.
This mission implements new ground, take-off, hovering, flying, landing and critical modes to fly relative to the vehicle. The drone front camera is not necessarily oriented towards the vehicle; it can be guided by the user. The controller and the phone must remain in the vehicle.
In this mission, the drone is enriched with specific computer vision algorithms that provide the vehicle position and velocity when the drone is above it and the vehicle is in the field of view of the drone cameras. The position and velocity measures from the different computer vision algorithms are fused with the information from the controller's GPS and barometer to accurately estimate the vehicle trajectory, which is then used as a reference for the drone.
At any time, the user can over-pilot the drone, and can take full control of it if no information is provided by the drone cameras.
A custom flight mode allows the drone to return autonomously above the vehicle at the user's request, or when the amount of energy needed to reach the vehicle gets close to the energy remaining in the battery. The vehicle position is then predicted according to its speed and direction, to optimize the distance and the path to travel.


Hybrid stabilization
The ANAFI Ai camera has the most accurate stabilization on the micro-UAV market.
It combines two stabilization systems:
· 3-axis mechanical stabilization with the gimbal
· 3-axis electronic image stabilization (EIS)
The mechanical stabilization stabilizes the camera's aiming axis regardless of the drone's flight attitude. The electronic image stabilization corrects the effect of micro-vibrations at frequencies beyond 100 Hz, which cannot be managed by a mechanical actuator.
Main camera gimbal
The mechanical stabilization allows the stabilization and orientation of the camera's viewing axis on all 3 axes.

The 3 axes of rotation of the main camera’s gimbal
Key features
· 3 axes of mechanical stabilization for the main camera
· 292° vertical displacement, field of view from -116° to +176°


Performances

Gimbal performances        ANAFI Ai                  Skydio 2            MAVIC 2 Air

Angular stabilization      ±1°                       No data             No data
accuracy

End stops                  Pitch: -116°/+176°        Pitch: ±124°        Pitch: -135°/+45°
                           Roll: ±36°                Roll: ±120°         Roll: ±45°
                           Yaw: ±48°                 Yaw: ±12.5°         Yaw: ±100°

Piloting range             ±90° (pitch axis)         -110°/+45°          -90°/+24°

Maximal rotation speed     ±180°/s (pitch axis)      No data             100°/s

Protection                 Front camera crashproof   No data             None
                           self-protection

The EIS algorithm corrects the effects of wobble and the distortion of the wide-angle lens, and it digitally stabilizes the image along the 3 axes (roll, pitch and yaw).
The method consists in applying a geometric transformation to each image. Each image is associated with a timestamp and a precise attitude thanks to the IMU, and the transformation compensates for the distortion of the optics, the wobble, and the measured movements of the camera module.
292° tilt range
The camera can tilt vertically from -116° to +176° around the pitch axis, providing observation above and below the drone. This is a unique capability on the micro-UAV market.


4G Connectivity
Key features
· Always connected: worldwide continuous connection to the Internet
· Seamless Wi-Fi to 4G switch: flies Beyond Visual Line Of Sight
· Worldwide LTE compatibility
· Real-time streaming with 12 Mbps video bitrate / low-latency 1080p streaming
· In-flight cloud file transfer
4G
ANAFI Ai integrates a Fibocom L860-GL 4G LTE module (in addition to the Wi-Fi radio), allowing it to transmit 1080p video with a very low latency (300 ms), without range limit, anywhere in the world.

Compatibility
The ANAFI Ai 4G module supports more than 28 frequency bands, covering more than 98% of the frequencies deployed around the world.

LTE FDD: bands 1, 2, 3, 4, 5, 7, 8, 12, 13, 14, 17, 18, 19, 20, 21, 25, 26, 28, 29, 30, 32, 66
LTE TDD: bands B38/39/40/41/42
DL 4×4 MIMO: bands 1, 2, 3, 4, 7, 30, 40, 41NB, 66
UMTS: bands 1, 2, 4, 5, 8
LAA: band 46

Automatic network switch
The quality and capacity of the 4G and Wi-Fi networks are measured every 100 ms to adapt the streaming to the network conditions. Combined with the routing algorithms, this maintains the connection between the drone and its controller even when the Wi-Fi link is strongly disrupted. Thus, when the effective throughput (goodput) of the Wi-Fi link drops below 1.5 MBps, the system automatically switches to 4G.


To limit mobile data consumption when the pilot is within range of the drone's Wi-Fi network, the 4G to Wi-Fi transition is also performed automatically, without cutting the video stream.
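A minimal sketch of such a link supervisor, using the 1.5 MBps (12 Mbit/s) Wi-Fi goodput threshold stated above; the hysteresis value is an assumption added here to avoid oscillating between the two links, and is not documented by Parrot:

```python
def select_link(goodput_mbps, current, threshold=12.0, hysteresis=2.0):
    """Pick the active link from the measured Wi-Fi goodput (sketch).

    goodput_mbps -- effective Wi-Fi throughput, in Mbit/s
    current      -- currently active link, "wifi" or "4g"
    """
    if current == "wifi" and goodput_mbps < threshold:
        return "4g"   # Wi-Fi too weak: switch to 4G
    if current == "4g" and goodput_mbps > threshold + hysteresis:
        return "wifi"  # Wi-Fi comfortably back: return to it, saving data
    return current     # inside the hysteresis band: keep the current link
```

Evaluating this decision at each 100 ms measurement keeps the stream on whichever link sustains it, which is the behavior described in the two paragraphs above.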
Implemented video stream optimization algorithms:
Congestion control (4G)
The congestion control algorithm makes it possible to:
· measure packet loss over the entire network loop
· measure the latency (Round Trip Time)
· adapt the throughput according to these two parameters
The final objective of the algorithm is to maximize the available throughput while maintaining the lowest possible latency. This algorithm is implemented on each of the interfaces available on the drone, each one having its own parameters optimized according to the network. Thanks to the information provided by this algorithm, the link supervisor decides on the routing and the active interface.
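One step of such a controller can be sketched as follows; the thresholds, step sizes and bitrate bounds are illustrative assumptions, since the document only states that throughput is adapted from packet loss and RTT:

```python
def adapt_bitrate(bitrate_kbps, rtt_ms, loss_pct,
                  rtt_target_ms=300.0, floor=500, ceiling=12000):
    """One step of a loss/latency driven bitrate controller (sketch).

    bitrate_kbps -- current video bitrate
    rtt_ms       -- measured Round Trip Time
    loss_pct     -- measured packet loss percentage
    """
    if loss_pct > 2.0 or rtt_ms > rtt_target_ms:
        bitrate_kbps = int(bitrate_kbps * 0.7)  # back off multiplicatively
    else:
        bitrate_kbps += 250                     # probe for more throughput
    return max(floor, min(ceiling, bitrate_kbps))
```

The multiplicative decrease / additive increase shape converges toward the highest bitrate the link sustains while keeping latency near the target, which matches the stated objective of the algorithm.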
Remote control drone connection in 4G
ANAFI Ai connects to the remote control over 4G in less than 30 seconds when the drone is out of Wi-Fi range, and in less than 15 seconds when the drone is within Wi-Fi range.
– Discovery and initiation of the connection based on the SIP VoIP protocol
– Use of a relay server to establish the connection on secured networks
Video stream performances
– Latency: 300 ms
– Security: video and controls secured by SRTP/DTLS, in accordance with WebRTC
– Antennas: 28 LTE bands between 700 MHz and 2.6 GHz


Wi-Fi
Key features
· Max range: 4 km (CE), 9 km (FCC) with the Parrot Skycontroller 4
· Gain at 2.4 GHz: 3.5 dBi +/-1.5 dB (MAVIC Air 2: 1.5 dBi +2/-6 dB)
· Gain at 5 GHz: 3.5 dBi +/-1 dB (MAVIC Air 2: 1.5 dBi +3/-7 dB)
Omni-directional transmission system
ANAFI Ai has 4 directional antennas with reflectors (gain: 2.5 dBi/antenna). The drone determines the best antenna pair depending on its orientation and position relative to the pilot's position.
With a recombined gain of 3.5 +/-1.5 dBi in the horizontal plane of the drone, the radio gain of ANAFI Ai is strong and homogeneous.
The downward radiation of the antennas has been improved by +4 dB compared to ANAFI.
High Power Radio Front-End Design
The radio front end pushes the power to the maximum at the antenna feed, with a very good level of linearity and sensitivity (-94 dBm at 6.5 Mbps), allowing ANAFI Ai to reach the maximum power of the FCC limit.
Robust Wi-Fi Connection
Parameters 802.11
A subset of protocol parameters was selected to optimize performance in the drone use cases: relatively low throughput, low latency, variability in reception levels due to drone speed, long range, presence of interferers. These parameters include aggregation, number of retries, MiMo technology (STBC), management frame datarate, and disconnection conditions.
Flow adaptation and monitoring
ANAFI Ai continuously monitors its connection status at 4 Hz and can detect the presence of interference. This allows it to dynamically optimize the throughput and the size of the transmitted packets. It also alerts pilots if they are in a particularly interference-laden environment, or if they are close to losing the signal.
Smart interference avoidance system
ANAFI Ai has a channel avoidance algorithm (dual band 2.4 GHz and 5 GHz) in case of interference detection.
Bandwidth reduction
At the limit of its range and if the conditions allow it, ANAFI Ai can switch to 10 MHz bandwidth to improve its sensitivity by 3 dB and gain 40% in range.
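The relation between the 3 dB sensitivity gain and the quoted 40% range gain follows from a free-space path-loss model, under which received power falls with the square of distance:

```python
# Under free-space path loss, a sensitivity improvement of D dB
# extends the usable range by a factor of 10 ** (D / 20).
range_gain = 10 ** (3 / 20)
# 10 ** 0.15 ≈ 1.41, i.e. roughly the 40% range gain quoted above.
```

In real conditions (ground reflections, obstacles), the path-loss exponent is higher than 2, so the achievable gain depends on the environment; the free-space figure is the upper bound.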


Radio Performances

Physical layer: OFDM (Wi-Fi)
List of channels:
  2.4 GHz EU: 2.412 to 2.462 GHz (5 MHz steps)
  2.4 GHz US: 2.412 to 2.462 GHz (5 MHz steps)
  5 GHz EU: 5.745 to 5.825 GHz (20 MHz steps)
  5 GHz US: 5.745 to 5.825 GHz (20 MHz steps)
Average at 2.4 GHz: 0 dB (ref)
Radiance variability (2.4 GHz): -1 dB
Average at 5 GHz: 0 dB (ref)
Radiance variability (5 GHz): -1 dB
Sensitivity (antenna bottom): -95 dBm
Overlapped: 0 dB (ref)
ACR (+25 MHz): 0 dB (ref)

Video streaming
Key features
· H.264 encoding with RTSP and RTP transmission protocols
· Internet stream sharing with the RTMP protocol, from the 4G phone on the remote control
· Local stream sharing with the HDMI video output of the remote control
· Multi-camera: ability to access the stereo, vertical and disparity map camera streams
· Video stream compatible with RTP players, such as VLC or MPlayer
· 1080p at 30 fps, up to 8 Mbit/s
· Compatible with the H.264 and MPEG standards
· Reduced latency (< 300 ms glass-to-glass)
Stream performance

Resolution: up to 1080p
Frame rate: up to 30 fps
Bit rate: up to 8 Mbit/s
Video encoding: H.264/AVC
Protocol: RTP streaming with RTSP session management
Latency: 300 ms

Implemented video stream optimization algorithms:
Parrot Gen4 Streaming (4th generation)
Advanced encoding for Error resilience
The H.264 stream is tuned to minimize the impact of packet loss on the radio network, and to avoid network congestion. Error propagation due to inter-frame prediction is also minimized.
The algorithm combines the following methods:


· Slice-encoding: divides frames into 45 independent portions to minimize the impact of sporadic network losses.
· Intra-refresh: images are encoded in inter prediction (P-frames) only, to smooth the stream flow and avoid network congestion. Random access to the stream is allowed by intra-encoding successive image bands that scan the image surface over time. The intra bands are present every 3 frames, on a 9-frame pattern (for a complete refresh every 27 frames, i.e. about one second).
· Non-reference images: these images are not used as references for other images in the stream and are therefore totally independent; the loss of one of these images during transmission has no consequence beyond the image itself. 2 out of 3 images are non-reference.
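The intra-refresh pattern can be sketched as a frame schedule. The 9-band split is an assumption consistent with the figures quoted (one intra band every 3 frames, full refresh every 27 frames), not Parrot's documented band count:

```python
def intra_bands(num_frames, bands=9, period=3):
    """Schedule of intra-refresh bands: one band of the image is
    intra-coded every `period` frames, cycling through all bands
    (full refresh every bands * period = 27 frames).

    Returns {frame_index: band_index} for frames carrying an intra band.
    """
    schedule = {}
    for frame in range(num_frames):
        if frame % period == 0:
            schedule[frame] = (frame // period) % bands
    return schedule

schedule = intra_bands(27)  # one full refresh cycle
```

Over one 27-frame cycle, every band of the image is intra-coded exactly once, which restores random access without ever emitting a full I-frame.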
Error concealment
This algorithm reduces the visual impact of network losses and enables the interoperability of all decoders, while ensuring a syntactically complete stream: missing image parts are reconstructed as skipped portions, identical to those of the reference image.
The glitches are therefore contained within zones impacted by losses, and do not spread to the entire image.
The following graphs illustrate the rate of success in decoding macroblocks, for a network loss rate of 5%, with and without ANAFI Ai's advanced streaming functions. The algorithm ensures correct decoding of 75% of the macroblocks, which enables the user to carry on with the mission without screen freeze or streaming loss.
Congestion control
The algorithm estimates the Wi-Fi and radio environment to anticipate and avoid packet loss and congestion on the network, thus helping to reduce latency.
The algorithm is based on an estimate of the link capacity calculated from the data rate and the error rate at the physical level; it then acts on the network encoding and encapsulation parameters.
Metadata
Metadata are transmitted with the video stream. They notably contain drone telemetry elements (position, altitude, speed, battery level, etc.) and video metrics (angle of the camera, exposure value, field of view, etc.).
The synchronization of the images and the metadata opens up functions such as precise map positioning, flight instrument tracing within the HUD, or the inclusion of augmented reality elements.
The inclusion of metadata uses standard methods (RTP header extension); the format of the data, defined by Parrot, is public: it is available within ANAFI Ai's SDK.


Camera
Key features
– 48 MP sensor
– 4K video at 60 fps
– HDR10 and HDR8
– 14 EV dynamic range
– 6x zoom
48 MP sensor

ANAFI Ai’s sensor supports a high megapixel count for finely detailed aerial shots.
It uses a Quad Bayer color filter array, where groups of 4 adjacent pixels share the same color filter. Real-time HDR capture can be obtained in both photo and video modes by adding the signals from the four adjacent pixels.
Its dynamic range is 4 times greater than that of conventional sensors featuring a regular Bayer matrix. Even difficult scenes can be captured with minimal highlight blowout or loss of detail in the shadows.
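The "adding the signals from the four adjacent pixels" step is a 2x2 binning of same-color groups. A minimal sketch on a toy raw frame (real pipelines operate on the Bayer mosaic with per-channel handling, which is simplified away here):

```python
def bin_quad_bayer(raw):
    """Sum each 2x2 group of same-color pixels of a Quad Bayer raw
    frame, yielding a quarter-resolution image with roughly 4x the
    signal per output pixel.

    raw -- 2D list of pixel values with even dimensions
    """
    h, w = len(raw), len(raw[0])
    return [
        [raw[y][x] + raw[y][x + 1] + raw[y + 1][x] + raw[y + 1][x + 1]
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

binned = bin_quad_bayer([[1, 1, 2, 2],
                         [1, 1, 2, 2],
                         [3, 3, 4, 4],
                         [3, 3, 4, 4]])
# Each 2x2 same-color group collapses into one pixel: [[4, 8], [12, 16]]
```

Summing four pixels quadruples the collected signal while the read noise of each pixel adds only in quadrature, which is where the extended dynamic range comes from.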
Lens design

The ANAFI Ai lens is specifically designed for Parrot. It integrates 6 aspherical elements and has been optimized for low optical flare.
This lens gives a 68° HFoV in Standard video mode, and a 64.6° HFoV in Standard photo mode.


Video modes

Description of modes
ANAFI Ai shoots smooth 4K 60 fps video, including in P-Log, as well as HDR10 4K videos up to 30 fps. The following table summarizes all ANAFI Ai video modes. Default values appear in bold.

Video encoding
Our users can choose between H.264 (AVC) and H.265 (HEVC) formats.
For all resolutions, we use the following pixel formats:
– YUV420p (8 bits/component, BT.709 colorspace) for Standard and HDR8
– YUVJ420p (8 bits/component, full range, BT.709 colorspace) for P-Log style
– YUV420p10 (10 bits/component, BT.2020 colorspace) for HDR10 recording, in H.265 only


HDR
When recording videos in HDR8 and HDR10, ANAFI Ai covers a 14 EV dynamic range.
The HDR10 format allows for a maximum brightness of 1,000 nits and a 10-bit color depth. It provides a billion-color palette, against 16 million for Standard Dynamic Range. Compared to HDR8, HDR10 allows for an image that is over twice as bright, with a matching increase in contrast.
HDR8 can be displayed on any standard screen, while HDR10 is designed for HDR10 TVs and screens.
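The palette figures quoted above follow directly from the bit depths, as a quick check shows:

```python
# Color palette sizes for 8-bit (SDR / HDR8) and 10-bit (HDR10) video:
# three color components, each with 2**depth possible values.
sdr_colors = (2 ** 8) ** 3     # "16 million" colors
hdr10_colors = (2 ** 10) ** 3  # "a billion-color palette"
```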

Photo modes
Description of modes
The drive modes category gathers the settings that are related to the number of images that are taken for each shutter release.
Single capture mode: Full frame is acquired and immediately processed.
Bracketing mode: Users can take a burst of 3, 5 or 7 frames with a different exposure for each frame. The available presets are the following:
· [-1 EV, 0, +1 EV] (default setting)
· [-2 EV, -1 EV, 0, +1 EV, +2 EV]
· [-3 EV, -2 EV, -1 EV, 0, +1 EV, +2 EV, +3 EV]
Burst mode: Users can take a burst of 10 frames in 1 second.
Panorama modes:
· Spherical (360°): Sphere, Little Planet and Tunnel outputs
· Horizontal (180°)
· Vertical (109°)
· New Superwide mode, stitching 9 images (HFOV 110°, rectilinear)


Refer to this table for details of Panorama output properties:
Timelapse mode: This mode allows taking pictures at the following fixed time intervals:
· 48 MP: 1, 2, 4, 10, 30 or 60 seconds
· 12 MP: 0.5, 1, 2, 4, 10, 30 or 60 seconds

GPS Lapse mode: This photo mode has been designed for inspection and photogrammetry. It enables taking pictures at the following fixed distance intervals: 5, 10, 20, 50, 100 or 200 meters.

The following table summarizes photo modes and resolutions, along with the sensor readout mode.


Settings
The following table summarizes the available settings for each mode.


6x zoom
Zoom is available in all photo and video modes.
Combined with the 48-megapixel sensor, precise sharpening algorithms enable high-definition still images, even when using the 6x digital zoom. ANAFI Ai users can see 1 cm details at 75 m.
The high pixel count also allows cropping 4K videos to 1080p with barely any loss of quality.


Photogrammetry

Key features
· Compatible with the PIX4D Suite
· Photogrammetry flight modes available in FreeFlight 7 and OpenFlight
· Single-click flight plan creation
· 48 MP stills at 1 fps: twice as fast as competitors
· Survey-grade accuracy: 0.46 cm/px GSD at 30 m (100 ft)
· In-flight 4G transfer to PIX4Dcloud

Aerial drone imagery is changing the way professionals carry out inspections and surveying.
Photogrammetry techniques are used to process images collected by UAVs to create 2D and 3D models. These digital twins allow our customers to plan maintenance.
4G brings an unprecedented robustness to the drone data link. Our customers can operate their drones over large areas, near metallic structures and buildings, without losing the connection.

Positano, Italy ­ modelled with ANAFI Ai


Designed for inspection and mapping

Best in-class sensor

ANAFI Ai embeds a 48 MP 1/2" CMOS sensor, equipped with a Quad Bayer color filter pattern and specifically suited for inspection and photogrammetry. Our sensor captures highly detailed images with a wide dynamic range.
· High resolution stills ANAFI Ai generates 48 MP still photos, so customers can capture all details in high definition and generate high density point clouds.
· Great sharpness Inspection missions require the ability to identify minute details, such as serial numbers, connectors, rust spots, and incipient cracks.
· Wide dynamic range 10 stops of dynamic range in Standard mode, 14 stops in HDR mode. Optimal gradation of your images is essential to generate consistent point clouds and high quality 2D or 3D reconstructions.
55% more details than 1" sensors
The 1/2" Quad Bayer 48 MP sensor outperforms in sharpness the 1" 20 MP sensors embedded in several professional drones, as the following images taken from a roof inspection mission illustrate.


Ideal for inspection
The ANAFI Ai gimbal embeds a 6-axis hybrid (mechanical + electronic) stabilization system, which compensates for flight vibrations and guarantees sharp images.
6-AXIS STABILIZATION
ANAFI Ai’s camera has a controllable range of -90° to +90°, making it the ideal aerial tool for inspecting the underside of a bridge deck.
±90° CONTROLLABLE TILT


Survey-grade accuracy

ANAFI Ai allows users to reach a 0.46 cm/px GSD at 30 m, meaning a relative planimetric accuracy within 0.92 cm.

For comparison, at the same altitude, the DJI Phantom 4 Pro v2 only delivers a GSD of 0.82 cm/px. In other words, ANAFI Ai can map the same target while flying over 1.5 times higher than a Phantom, with an equivalent level of detail.

                       ANAFI Ai    Phantom 4 Pro v2
Image width (px)       8,000       5,472
Image height (px)      6,000       3,648
Sensor width (mm)      6.4         13.2
Sensor height (mm)     4.9         8.8
GSD (cm/px) at 30 m    0.46        0.82
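The GSD figures above can be approximated from altitude, horizontal field of view and image width. The sketch below uses the 64.6° HFoV quoted in the Lens design section; it is a thin-lens approximation that ignores distortion correction, so the result is close to, but not exactly, the table value:

```python
import math

def gsd_cm_per_px(altitude_m, hfov_deg, image_width_px):
    """Ground sampling distance of a nadir shot: the width of the
    ground footprint divided by the number of pixels across it."""
    ground_width_m = 2 * altitude_m * math.tan(math.radians(hfov_deg) / 2)
    return ground_width_m / image_width_px * 100.0

# ANAFI Ai: 64.6° HFoV in Standard photo mode, 8,000 px wide images
anafi_gsd = gsd_cm_per_px(30.0, 64.6, 8000)
```

This yields about 0.47 cm/px at 30 m, consistent with the 0.46 cm/px figure in the table; the small difference comes from lens distortion correction and rounding.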

Harness the power of AI and 4G
One app. Any flight plan.
Launch all your surveying, inspection and photogrammetry missions from FreeFlight 7.
· Photogrammetry flight modes available in FreeFlight 7
Grid, double-grid and circular missions from the PIX4Dcapture flight planning app are now integrated into the ANAFI Ai flight app.

· Single-click flight plan creation
A click on the FreeFlight 7 3D interactive map is all it takes to generate a quick building scan. Artificial intelligence automatically defines the optimum flight parameters and trajectory. The 48 MP images, precisely geotagged by ANAFI Ai sensors (IMU, GNSS & ToF), allow an accurate 3D reconstruction.


FreeFlight 7's mapping background comes from ArcGIS. The 3D building representation is based on OpenStreetMap data, which covers cities all over the world.
Automatic photogrammetry flight plan created with a single click on 3D land registry
· When AI meets visual inspection
By combining the artificial intelligence of our drone with that of the PIX4Dinspect online platform, our users can perform inspections faster and more efficiently than ever.
Machine learning algorithms recognize antennas on cell towers, and extract their dimensions, height, downtilt, azimuth and plumb.


Autonomous flight
ANAFI Ai's omnidirectional sensing system, based on stereoscopic vision, is automatically oriented in the drone's direction of movement.
The drone detects obstacles located up to 30 m (100 ft) away. While performing a flight plan, Artificial Intelligence technology continuously builds and updates an occupancy grid that represents the drone's environment in voxels.
Occupancy grid obtained during a flight in a forest, observed from the drone’s viewpoint
Algorithms determine the best trajectory to avoid obstacles, while the drone stays focused on its objective: the flight mission.


4G connectivity
ANAFI Ai is the first commercial micro-drone equipped with a 4G module. It covers more than 98% of the frequencies used worldwide.
Inspection of high-voltage power lines. Captured by ANAFI Ai, processed by PIX4Dmatic. Scene size: 4060 x 60 x 70 m (13,320 x 200 x 230 ft), 2,172 images.
GSD: 1.3 cm/px. Altitude: 90 m (295 ft). Front / side overlap: 90% / 65%.
The drone seamlessly switches from Wi-Fi to 4G, and vice versa, favoring the strongest connection. This guarantees:
· the ability to conduct BVLOS flight operations
· a stable connection, even when flying in environments cluttered with obstacles and buildings
· safe flights in interference-rich environments
· 1080p 30 fps live streaming
· direct image upload to cloud servers


Flight time optimization
Achieve significant time efficiency thanks to ANAFI Ai’s imaging specifications:
· The 48 MP sensor allows the drone to fly approximately 1.5 times higher than drones with 1” 20 MP sensors, while reaching the same GSD. Higher altitude and faster missions go hand in hand.
· Photo capture at 1 fps: ANAFI Ai shoots twice as fast as the Autel EVO 2 and DJI Phantom 4 Pro v2.
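The 1.5× figure follows from pixel counts at comparable fields of view: for the same GSD, achievable altitude scales with horizontal pixel count. A quick check in Python (the sensor and lens parameters in the second example are illustrative assumptions, not official ANAFI Ai specifications):

```python
def gsd_cm_per_px(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground sampling distance: ground width covered by one pixel, in cm."""
    ground_width_m = altitude_m * sensor_width_mm / focal_length_mm
    return ground_width_m / image_width_px * 100

# At the same GSD and comparable fields of view, altitude scales with
# horizontal pixel count: 48 MP (8000 px wide) vs 20 MP 1" (5472 px wide).
ratio = 8000 / 5472
print(round(ratio, 2))  # ~1.46, i.e. roughly 1.5x higher

# Illustrative GSD at 90 m with assumed sensor width and focal length:
print(round(gsd_cm_per_px(90, 9.0, 6.6, 8000), 2))
```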

In-flight 4G transfer to PIX4Dcloud
In the process of creating a digital twin from UAV aerial imagery, file transfer and photo processing are two time-consuming tasks.
ANAFI Ai helps our customers speed up their workflow and:
· transfer images to secure servers during the flight, leveraging the drone’s 4G connectivity
· immediately start computing objects when the flight ends: orthomosaic maps, point clouds, elevation models and textured meshes
· easily share survey-grade 2D maps and 3D models with collaborators and clients

Workflow diagram: CAPTURE → PROCESS → VISUALIZE & SHARE

Compatible with the PIX4D Suite
A unique suite of mobile, desktop and cloud photogrammetry applications, consisting of:
PIX4Dcloud: online professional image processing
PIX4Dmapper: advanced data processing software
PIX4Dscan: flight planning for telecom tower inspections
PIX4Dinspect: industrial inspection automation and asset management
PIX4Dreact: offline 2D mapping for emergency response and public safety


Photogrammetry or LIDAR?
Why you should choose photogrammetry for surveying and inspections
Aside from its affordability and ease of use, aerial photogrammetry is your best choice when a visual interpretation of data is needed.
Photogrammetry indeed:
· provides multiple renderings: orthomosaic maps, colored point clouds, textured meshes
· generates higher-density point clouds, each point carrying a significant amount of information (elevation, texture, color)
· outperforms LIDAR in the accuracy of 2D and 3D renderings, which are photorealistic (LIDAR does not provide a high level of contextual detail)
LIDAR cons
· Costs up to 100 times more than a UAV with an RGB sensor
· Complexity of data processing and potential errors of interpretation
· Adding color information is possible, but it makes the process even more complex
· Reserved for specific use cases and situations, LIDAR generates only point clouds


Partner Ecosystem
We keep expanding our global ecosystem of drone software providers through our SDK Partner Program, offering specialized and unique solutions for the growing and evolving needs of professional users. Professionals can leverage our compatible drone software ecosystem, from fleet management, flight plan and flight log import, to surveying, mapping, monitoring, and data analysis, to get the most out of their flight operations. The following partners are part of our ecosystem:
PIX4D
2D/3D modeling applications: a unique suite of mobile, desktop and cloud photogrammetry applications https://www.pix4d.com
Skyward, a Verizon company

Integration and training for the ANAFI range: aviation management platform, including airspace, LAANC access, as well as training, hardware, and connectivity for enterprise drone fleet deployments. https://skyward.io
DroneSense

DroneSense is the most complete drone management and collaboration platform for public safety. https://www.dronesense.com


Hoverseen
Automatic drones for video surveillance. Flying cameras for the security and inspection of industrial sites. https://www.hoverseen.com/fr/
Aloft
The best platform for flight and drone workflows: Aloft unifies the mission, aircraft, and data to make drone operations safe and reliable. https://www.aloft.ai
Survae

An intelligent video, image, and sensor data platform: manage large volumes of 360° video, imagery, geospatial and time-based data with ease, and see the complete picture from land, air, and sea on an interactive map. https://www.survae.com/parrot/
DroneLogbook

Comprehensive management platform: DroneLogbook is the industry leader in automated flight log importing.
https://www.dronelogbook.com/hp/1/index.html


Dronelink
Flight plans management platform: Dronelink is an innovative flight app that automates drone missions. https://www.dronelink.com
QGroundControl
Intuitive and powerful open-source Ground Control Station for the MAVLink protocol: QGroundControl provides full flight control and mission planning for any MAVLink-enabled drone. http://qgroundcontrol.com
UgCS

Leading drone control software: safe and efficient toolset for UAV land surveying and industrial inspections. LIDAR, Magnetometer, GPR, automatic Area, Photogrammetry, Corridor, Vertical, and SAR survey planning tools with custom elevation data import and terrain following mode. https://www.ugcs.com
High Lander

High Lander’s industry-leading fleet management platform, Mission Control, provides drone fleets with autonomous flight, intelligent airspace control, and coordinated air continuity.
https://www.highlander.io


Rapid Imaging
Rapid Imaging is a situational awareness technology provider delivering products leveraging geospatial augmented reality and artificial intelligence to Government and Enterprise Customers. https://www.rapidimagingtech.com/
AirData
AirData UAV is the largest online fleet data management and real-time flight streaming platform. It offers full-spectrum drone management that includes pre and post-flight data analysis. https://airdata.com/
Textron System
Textron Systems provides full support for intelligence, surveillance and reconnaissance missions. https://www.textronsystems.com


Software Development Kit
Key features
· Embed your code on the drone with Air SDK
· Create mobile apps with Ground SDK
· Customize an open-source GCS with OpenFlight
· Use state-of-the-art drone simulation with Sphinx
· Develop Python scripts with Olympe
· Process video and metadata with PdrAW
The Parrot Software Development Kit is a set of tools and open-source software for developers. Since the first A.R. Drone, open source has been at the heart of our platforms and developer tools, and Parrot continually contributes to the open-source community.
All our SDK resources are available for free, without registration or tracking, on the Parrot Developer Portal https://developer.parrot.com.
Join thousands of developers on our forum and discuss directly with Parrot engineers.
Write code. Build apps. Create Missions. Fly drones!


Run your code on ANAFI Ai
Air SDK provides a breakthrough technological architecture to load and run code directly on ANAFI Ai. Developers can program custom-designed flight missions with access to all drone sensors, connectivity interfaces, and autopilot features.
Air SDK gives onboard access to:
· All the sensors (IMU, GPS, TOF, …) and flight states
· Video streams and metadata from all cameras
· Wi-Fi, 4G and USB communication interfaces
· Depth maps and occupancy grids
· Obstacle avoidance trajectory generation
Any developer can:
· Modify the drone state machine by creating Flight Missions
· Change the guidance mode
· Add onboard Linux processes (such as transferring data over the 4G link or running computer vision)
Air SDK supports the C++ and Python programming languages. Air SDK comes with a comprehensive installation guide and API documentation. Many sample applications illustrate all the possibilities offered by its unique architecture.
Get Air SDK documentation on Parrot Developer Portal.


Air SDK software architecture


Create powerful mobile applications
Ground SDK is a Ground Control Station (GCS) framework for mobile devices (both iOS and Android are supported). It allows any developer to create a mobile application for ANAFI Ai, controlling the drone directly from the mobile device. All the features of the aircraft (control, video, settings) are accessible through an easy-to-use and fully documented API.
Get Ground SDK documentation on Parrot Developer Portal.


Open-source ground control station
For the first time, Parrot is open-sourcing its ground control station application. OpenFlight is the open-source core of our acclaimed FreeFlight 7 application. As a developer, you can focus on adding your own features and immediately get a professional-looking app, ready to be published on the application stores. OpenFlight contains:
· All the UX of FreeFlight 7
· All the settings interfaces
· The code for real-time 3D visualization of obstacles
· The code for 4G connectivity management
OpenFlight is published under the BSD-3 license with a full installation guide and comprehensive documentation.
Get OpenFlight code and documentation on Parrot Developer Portal.


3D photorealistic simulation
Parrot Sphinx is a state-of-the-art drone simulation tool. Parrot engineers use it to develop and test all the features of ANAFI Ai.
The general concept is to simulate the real drone firmware, with all its sensors, in a visually and physically realistic environment.
Parrot Sphinx allows you to:
· Simulate all cameras and sensors
· Simulate depth maps and segmentation images
· Navigate many realistic 3D scenes
· Connect to different types of remote controllers
· Use scripts to control the simulation
· Add pedestrians and vehicles
· Visualize and record flight data
· Configure drone sensors and surrounding physics elements
· Control the real-time factor
Parrot Sphinx is built on cutting-edge industry standard components:
Gazebo, the open-source robotic simulation framework from OSRF
Unreal Engine, the advanced real-time 3D creation tool from Epic Games

Get Sphinx documentation on Parrot Developer Portal.


ANAFI Ai in the Sphinx simulator


Use Python to control ANAFI Ai
Olympe provides a Python controller programming interface for ANAFI Ai. The original purpose of Olympe is to interact with the simulation environment Sphinx. Olympe can also control a physical ANAFI Ai from a remote computer.
Olympe framework allows you to:
· Connect to a simulated or physical ANAFI Ai
· Send command messages (piloting, camera orientation, RTH, Flight Plan)
· Start and stop video streaming from all cameras
· Record the video stream and the synced metadata

Example of command sequence in Olympe
Get Olympe code and documentation on Parrot Developer Portal.


Advanced media player
PdrAW is an advanced video viewer for ANAFI Ai media. The viewer supports both streamed (RTP/RTSP) and recorded (MP4) videos, on Linux, macOS, Android, and iOS platforms. PdrAW comes as a library (libpdraw), a wrapper library (libpdraw-backend), and a standalone executable (pdraw).

PdrAW video pipeline

PdrAW also manages video metadata. On ANAFI Ai, both the streamed and the recorded videos embed metadata that are publicly accessible and documented, allowing advanced processing of aerial videos.

Get PdrAW code and documentation on Parrot Developer Portal.


C.A.D. Models
We make 3D models of our drones available to integrate into your CAD projects, for rapid prototyping and accessory integration.

ANAFI Ai 3D model in C.A.D. software
Get ANAFI Ai C.A.D. model on Parrot Developer Portal.


MAVLink and GUTMA compatibility
ANAFI Ai is compatible with the open-source standard MAVLink v1 protocol, allowing real-time data exchange between the UAV and a control station. ANAFI Ai can be flown manually or follow automatic flight plans from any MAVLink-compatible ground station, such as QGroundControl.
For more information on the MAVLink protocol, refer to https://mavlink.io/en/
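For illustration, a MAVLink v1 frame starts with a fixed 6-byte header that is straightforward to parse. This minimal sketch decodes only the header fields; checksum validation is omitted and the checksum bytes in the example frame are placeholders, not computed values.

```python
import struct

def parse_mavlink_v1_header(frame: bytes):
    """Parse the fixed 6-byte MAVLink v1 header.
    Layout: STX (0xFE), payload length, sequence number, system id,
    component id, message id; payload and a 2-byte checksum follow."""
    stx, length, seq, sysid, compid, msgid = struct.unpack("<6B", frame[:6])
    if stx != 0xFE:
        raise ValueError("not a MAVLink v1 frame")
    payload = frame[6:6 + length]
    return {"seq": seq, "sysid": sysid, "compid": compid,
            "msgid": msgid, "payload": payload}

# A HEARTBEAT (msgid 0) frame from system 1, component 1, with its
# 9-byte payload (checksum bytes here are placeholders):
frame = bytes([0xFE, 9, 0, 1, 1, 0]) + bytes(9) + b"\x00\x00"
msg = parse_mavlink_v1_header(frame)
print(msg["msgid"], len(msg["payload"]))  # 0 9
```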

QGroundControl interface
ANAFI Ai is compatible with the open-source GUTMA standard protocol for flight data. The Global UTM Association is a consortium of major players in the field of airspace traffic management. For more information on the GUTMA protocol, refer to https://gutma.org.


Skycontroller 4
Key features
· Micro-HDMI port for real-time broadcast to a screen or FPV goggles
· Detachable and storable joysticks for ease of transport
· Compatible with devices up to 8″ (iPad mini)
· Quick USB-PD charging through the USB-C port (1h40 for a 99% charge)
· 2h30 autonomy (over 4 ANAFI Ai battery flight times)
· Dust resistant: IP5X
· 4G compatible (with a SIM card in ANAFI Ai)
The Skycontroller 4 remote control shares ANAFI Ai’s visual identity. Combined with FreeFlight 7, it is designed for performance and comfort of use.


Operations
· Skycontroller 4 connects to the drone as soon as it is powered on
· The gap between the handles is adjustable, to ease the insertion of the phone and hold it safely
· Included cables enable the connection of the Skycontroller 4 to devices equipped with Lightning and USB-C connectors. These short cables stay out of the user’s way, inside the handle
Design
Skycontroller 4’s design features:
· Ergonomic handling with easy access to buttons and triggers
· Compatibility with phones and tablets up to 8”
· A double slide system for the integration of the device into the ecosystem
· Hollow handles to accommodate cables and joysticks
· Two rings to attach a camera-style neck strap
Connectors
· Side USB-C port for iOS or Android device connection
· Bottom USB-C port for quick charging
· Micro-HDMI port for video output
Connectivity
· Transmission system: Wi-Fi 802.11a/b/g/n (Wi-Fi beacon)
· Frequencies: 2.4 and 5 GHz
· Maximum Wi-Fi range: 4 km (CE), 9 km (FCC)
· No maximum range in 4G mode
HDMI
The video stream from ANAFI Ai is decoded directly by the Skycontroller 4 and can be broadcast in real time (1080p, 30 fps), through the HDMI output, to a screen or FPV goggles.
Battery
· 18650 Li-ion (3350 mAh, 7.2 V)
· Charges in 2h with the enclosed USB-PD charger
· Operates and keeps the device charged for 2h30 on a single charge (the Skycontroller 4 supplies at least 500 mA, as per USB standards)
· Same wintering mode as ANAFI Ai smart batteries
· Smart charging function: the charging power is automatically adapted to the ambient temperature to avoid cell damage
Durability
· Dust resistant: IP5X
· Operating temperature range: -10°C to 40°C


Performances
Maximal range: 4 km (CE), 9 km (FCC)
Maximal compatible device size: 210 x 135 mm
Charging time: 2 hours
Autonomy on a single charge: 2h30 (keeping the device charged)
Video output: micro-HDMI
Dimensions: 238 x 147 x 55 mm
Mass: 606 g

Quality
· The Skycontroller 4 and its battery are CE and FCC certified
· The Skycontroller 4 is USB-IF, MFi and HDMI certified
· Each product is verified during production
· Joysticks and tilters are factory-calibrated for optimal precision
Durability tests are performed at each development stage to optimize the design of the Skycontroller 4.


Smart battery
Key features
· Smart Battery Management System
· Smart charging optimization
· Wintering mode for protection over time
· Black Box function for battery health monitoring

Performances
Mass: 355 g
Battery energy density: 221 Wh/kg
Charging time / Autonomy: 150 min / 32 min
Cells: 3 x LiPo
Capacity: 6800 mAh
Charger: Embedded USB-C
Durability: 96% capacity after 300 charge/discharge cycles
Storage temperature range: -20°C to 40°C
Charging temperature range: 10°C to 40°C
Operating temperature range: -10°C to 40°C


Functions
High density
The battery’s cells take advantage of the latest technology: high energy density combined with high power delivery. Their chemistry allows the charging voltage to be raised up to 4.4 V per cell.
USB 3
USB 3 internal electronics enable a 600 Mb/s data transfer rate for the retrieval of photos and videos directly from the drone.
Smart battery management system
The battery is equipped with a gauge which measures the battery’s main parameters in real time: voltage, current and temperature. Its internal algorithms compute the available energy in real time, based on these parameters and on the cells’ state of health.
The precision of this estimate is the basis of the Smart RTH function: the battery provides the information which enables the drone to determine the energy necessary to return safely to its take-off point.
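The Smart RTH decision can be sketched as a simple energy comparison: estimate the energy needed to fly back and compare it to the gauge’s remaining-energy estimate. This is a conceptual model only; the power draw, cruise speed, and safety margin below are illustrative assumptions, not Parrot’s actual values.

```python
def can_return_home(remaining_wh, distance_m, speed_m_s, power_w, margin=1.3):
    """Toy Smart RTH check: compare the energy needed to reach the
    take-off point (with a safety margin) against the remaining energy."""
    time_s = distance_m / speed_m_s
    needed_wh = power_w * time_s / 3600 * margin
    return remaining_wh >= needed_wh

# 800 m from home at 10 m/s cruise, ~150 W draw, 5 Wh left in the pack:
print(can_return_home(remaining_wh=5.0, distance_m=800, speed_m_s=10, power_w=150))
```

In practice the drone would trigger RTH while this check still passes with margin, rather than waiting for it to fail.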
Smart charging
With its embedded 45 W charger, ANAFI Ai’s battery can be charged easily and quickly through any USB-C adapter. The battery is USB-PD 3.0 (Power Delivery) compatible. This protocol enables a full charge in 150 minutes.
The charging power is automatically adapted to ambient temperature to avoid damaging the cells.
Wintering
After 10 days of inactivity (no charge nor flight), the battery automatically discharges itself to a 60% charge (its optimal storage level) and switches to wintering mode.
This mode guarantees the battery an optimal state of storage: it stops all current leakage by isolating the cells from the electronic circuit. This prevents the cells from reaching a voltage level (3 V) which would deteriorate the chemistry of the battery. Batteries can therefore be safely stored for a full year.
Batteries exit wintering mode when plugged into a power source.
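The wintering policy described above maps to a small decision function: nothing happens before 10 idle days; above the 60% storage level the battery self-discharges; at or below it, the cells are isolated. A stdlib-only sketch (function and state names are illustrative, not firmware identifiers):

```python
def storage_action(days_idle, charge_pct):
    """Toy wintering policy: after 10 days with no charge or flight,
    discharge to the 60% storage level, then isolate the cells."""
    if days_idle < 10:
        return "normal"
    if charge_pct > 60:
        return "self-discharge to 60%"
    return "wintering (cells isolated)"

print(storage_action(3, 95))    # normal
print(storage_action(12, 95))   # self-discharge to 60%
print(storage_action(30, 60))   # wintering (cells isolated)
```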
Black Box
Equipped with an internal memory, the battery saves the history of its measurements (charge/discharge cycles, state of health) and transfers it to FreeFlight 7. This data can also be used by Parrot Support services to investigate malfunctions.
USB-C “OTG” interface
The USB port of the battery can provide 2 A of power to an external peripheral, such as a 4G key, a CO2 detector, or any kind of USB-equipped electronic device.
Powerbank
The battery can be used as an emergency charger for any kind of electronic device (smartphone, tablet).
Charge indicator
The four LEDs of the battery indicate its level of charge in the following situations:


· battery charging
· power button activated
· battery installed on ANAFI Ai and drone powered on
The state of charge expresses the energy remaining in the battery, in quarters of the total energy the battery can store.
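The quarters-based display maps naturally to a small lookup: one LED per quarter of remaining energy. The exact boundary handling below (e.g. 25% lighting one LED) is an assumed mapping for illustration, not Parrot’s documented behavior.

```python
import math

def leds_lit(charge_pct):
    """Map state of charge to the four LEDs, by quarters of total energy:
    1-25% -> 1 LED, 26-50% -> 2, 51-75% -> 3, 76-100% -> 4 (assumed)."""
    if charge_pct <= 0:
        return 0
    return min(4, math.ceil(charge_pct / 25))

print([leds_lit(p) for p in (0, 10, 25, 40, 75, 100)])  # [0, 1, 1, 2, 3, 4]
```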
Quality
· Parrot is ISO 9001 certified.
· Batteries are UN38.3 certified for transportation.
· Batteries are CE and FCC certified.
· Quality control process:
o Parrot has implemented a reinforced quality control process for battery manufacturing (audits of suppliers, IQC controls) and set up test benches for each step of the assembly process
o Parrot performs random X-ray and tomography controls on samples, to check the quality of the cells (folding, connectors)
· If users choose to share their data with Parrot, their ANAFI Ai batteries’ state of health is transferred to Parrot’s cloud for support and statistics purposes.


Cybersecurity by design
Key features
· Zero data shared without user consent
· FIPS 140-2 compliant and CC EAL5+ certified Secure Elements
· Strong authentication for 4G
· Digitally signed pictures
· Transparency and Bug Bounty continuous security checks
No data shared by default
Parrot collects no data without the consent of the users. Users can decide whether or not to share data with Parrot’s infrastructure. Data hosted by Parrot enables users to synchronize flight data and flight plans between devices, eases support, and allows Parrot to enhance its products.
ANAFI Ai is compliant with the European Union General Data Protection Regulation (GDPR) and goes beyond it, for example with 1-click deletion of all data, so that users keep control very easily. It takes a single click in the FreeFlight 7 mobile app or in the privacy settings of the Parrot.Cloud account. Thus, users may not only stop sharing data at any time, they can also request data deletion very easily.
When the user consents to share data, data processing is fully transparent and described in the Parrot Privacy Policy.
When ANAFI Ai is connected to the Skycontroller 4 through 4G, Parrot infrastructure is used to pair the drone and the remote controller. If the user is not authenticated through a Parrot.Cloud account, they can still use 4G with a unique temporary account. When using Parrot infrastructure for 4G pairing, the video is encrypted with a key negotiated between the drone and the remote controller; Parrot has no access to unencrypted videos.
FIPS 140-2 compliant and CC EAL5+ certified Secure Element
ANAFI Ai embeds a Wisekey Secure Element which is NIST FIPS 140-2 Level 3 compliant and Common Criteria EAL5+ certified. A similar Secure Element is also embedded in the Skycontroller 4.
The Secure Element:
· performs cryptographic operations · stores and protects sensitive information


It features an ECDSA private key, with P-521 domain parameters, unique to each drone. This private key cannot be extracted from the Secure Element. The certificate associated with this key is signed by a certificate authority.
The Secure Element protects the integrity of the embedded software, provides a unique identity to the drone for 4G pairing and strong authentication, and features a unique digital signing of the pictures taken by the drone.


4G secure pairing and strong authentication
When a user enables 4G communication, the initial Wi-Fi connection is first used for a secure pairing process. During this process, the user securely proves they are connected to a specific drone. Thanks to ANAFI Ai’s Secure Element, they can do so without configuring any password on the drone.

Then, Parrot servers register the association between the user and the drone. When the Wi-Fi connection between the user and the drone is lost, ANAFI Ai automatically connects over 4G. ANAFI Ai performs a strong authentication with Parrot servers, using its private key stored in the Secure Element. Parrot servers look up the associated user and enable pairing between ANAFI Ai and the Skycontroller 4.

With 4G, ANAFI Ai performs a strong authentication to log in to Parrot servers. This strong authentication relies on a client certificate and a unique private ANAFI Ai key, stored in the Secure Element.
ANAFI Ai supports TLS, DTLS and SRTP protocols, to protect drone control and video streams to the Skycontroller 4.
Secure initialization and update
The drone’s boot sequence is secured: the system checks that it runs Parrot software, and that this software has not been tampered with. A security check is performed at each initialization. The update service also checks the digital signature of software updates.


Configuring user keys on the Secure Element
ANAFI Ai users have access to a dedicated operator account of the drone’s Secure Element. This account is used to configure keys specific to the user. Users can configure into the Secure Element the public keys of the flight mission providers they choose to trust. ANAFI Ai will only run flight missions which are digitally signed with these keys. This process prevents an attacker from running malicious flight missions on the drone.
Digitally signed pictures
ANAFI Ai’s Secure Element can digitally sign the pictures taken by the drone. This signature provides proof that:
· the signed picture was taken by that specific drone
· neither the picture itself nor its metadata have been tampered with (voluntarily or not); the metadata (EXIF and XMP) contain information about the date, time, and location of the picture

In other words, the digital signature secures all data relevant to a picture, including the place where and the time when it was taken, and by which ANAFI Ai drone.
Users, as well as partners offering software solutions that exploit drone photographs, can verify the digital signature of ANAFI Ai photos, either using the drone’s certificate, or through a public key directory provided by Parrot.
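The sign-then-verify flow can be illustrated with a stdlib-only sketch. Important caveat: the real scheme uses an asymmetric ECDSA P-521 key in the Secure Element (the verifier only needs the drone’s certificate or public key); since the Python standard library has no ECDSA, HMAC stands in for the signature here purely to show the flow of hashing the picture together with its metadata.

```python
import hashlib, hmac

# Stand-in for the drone's ECDSA P-521 key pair: an HMAC key, so this
# sketch stays stdlib-only. The real scheme is asymmetric and the key
# never leaves the Secure Element.
DRONE_KEY = b"illustrative-key-held-in-the-secure-element"

def sign_picture(image_bytes, metadata_bytes):
    # The signature covers the picture AND its EXIF/XMP metadata, so
    # tampering with date, time or location also breaks verification.
    digest = hashlib.sha256(image_bytes + metadata_bytes).digest()
    return hmac.new(DRONE_KEY, digest, hashlib.sha256).hexdigest()

def verify_picture(image_bytes, metadata_bytes, signature):
    return hmac.compare_digest(sign_picture(image_bytes, metadata_bytes), signature)

sig = sign_picture(b"<jpeg data>", b"GPS=48.87N,2.35E time=12:00")
print(verify_picture(b"<jpeg data>", b"GPS=48.87N,2.35E time=12:00", sig))  # True
print(verify_picture(b"<jpeg data>", b"GPS=43.00N,5.00E time=12:00", sig))  # False
```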
Transparency and Bug bounty continuous security check
Whenever possible, Parrot uses standard protocols and file formats. There is no obfuscated code, nor hidden features. This allows users to understand how Parrot products work and check their security. In addition, OpenFlight, the software used to control the drone, is open source: users thus benefit from full control.
In April 2021, Parrot launched a “Bug Bounty” program together with YesWeHack, the first European crowdsourced security platform. Through this partnership, Parrot benefits from YesWeHack’s vast community of cybersecurity researchers to identify potential vulnerabilities in its drones, mobile applications and web services.


The Bug Bounty program takes place in two phases:
· The private program initially gives exclusive access to selected security researchers and includes future Parrot drone models. The expertise and diverse skills of the researchers confirm the high level of security of the products before they are marketed, for the greater benefit of Parrot users’ security and the protection of their data.
· After this first phase in a private Bug Bounty program, and once commercialized, the products enter a public program. Their security is then scrutinized by the entire YesWeHack community, representing more than 22,000 cybersecurity researchers.


Data management
Parrot’s cloud enables users who have chosen to share their data to manage their flight and fleet data, as well as the media captured by their drones.
Data collected
Our cloud collects 4 types of data:
· “Static” (product data):
o Drone serial number
o Battery serial number
o Firmware version for drone and battery
o Hardware version for drone and battery
o Device model
o FreeFlight 7 release version
· “Events”:
o Alerts: battery, autopilot, sensors
o Connectivity: connection/disconnection, streaming start, interference alert, weak signal
o Camera: streaming statistics, settings changes
o Flight: status change (take-off, landing, hovering, etc.), flight mission activation (Flight Plan, Photogrammetry)
· “Contextual images”:
o Timelapse pictures (one every 2 minutes)
o Deep learning (matching more objects and landscapes, for flight autonomy improvement, tracking, obstacle avoidance)
o Stereo vision (depth map)
o Images triggered by events: start and end of precise hovering phases, precise landing, drone crash
o Faces are automatically blurred when transferred


· “Telemetry” (refer to the following table for details)

System | Description | Sampling
Flight control | GPS position, speed, height, drone’s attitude (phi, theta, psi angles), wind strength estimate | 5 Hz
Connectivity | Wi-Fi physical layer: RSSI, PER (packet error rate), throughput estimator, tx/rx bytes, tx/rx packets, Wi-Fi and non-Wi-Fi interference measurements; streaming RTP statistics; piloting commands (jitter, lost count) | 1 Hz
Camera | Zoom factor, shutter time, gain, NED reference | 1 Hz
GNSS | Number of satellites (in sight, in sync, precision) for GPS, Glonass, Galileo | 1 Hz
Gimbal | Camera tilt | 5 Hz
Battery | Temperature, voltage, remaining capacity | 1 Hz
Autopilot | Position, speed, height, drone’s attitude | 5 Hz
Vertical camera | Exposure time, gain, brightness | 1 Hz
Controller | Roll, pitch, yaw and speed commands; zoom command; gimbal command | 1 Hz

Final use of collected data
Parrot collects and exploits data only from customers who have agreed to share it, and only to improve the quality of its products.
Maintenance management
· Preventive maintenance: our tools collect all information relevant to missions (mission type, time of take-off and landing, mission count, drone locations, flight speed, Flight Plan and Air SDK settings). This gives us an accurate, real-time status of the ANAFI Ai drone fleet (as well as their controllers and batteries).
· Corrective maintenance: the collected information is useful to quickly pinpoint the state of a particular drone or battery.
Artificial Intelligence (AI) improvement
The AI elements carried by ANAFI Ai (PeleeNet, convolutional networks, etc.) offer users unequalled services and performance: obstacle avoidance, target following, several flight modes. The quality of the AI relies on the quantity and quality of the data (images and videos) collected: this data feeds the machine learning. The quality of the data is not the only crucial element: the metadata associated with this data is fundamental too. For this reason, our tool collects images and metadata on a regular basis and upon events, for a total of 30 to 50 Mb per minute of flight.


Finally, all text and faces contained in collected images are automatically blurred, once by the FreeFlight 7 app before sharing, and once at reception by Amazon cloud services. On the cloud, three analyses are performed:
· Amazon Rekognition “detect_faces”
· Amazon Rekognition “detect_texts”
· Amazon Rekognition “detect_labels”
The collection of images, videos, and metadata is an essential asset to widen the learning field of our vision algorithms, and to add new and complex situations to enhance the visual autonomy of our drone.


Pack
ANAFI Ai and its accessories are delivered in a sturdy (fiberglass-filled plastic) and compact carrying case, which also includes a slot for transporting a tablet (up to 8″).

Figure: ANAFI Ai pack
Pack contents
– ANAFI Ai drone
– 1 Parrot Skycontroller 4
– 1 smart battery, preinstalled on ANAFI Ai
– 1 hard case
– 1 international USB-PD charger
– 2 USB-C to USB-C cables (charge and pairing)
– 1 USB-C to Lightning cable (connection to iOS devices)
– 1 additional set of propeller blades
– 1 gimbal protection
– Documentation
All components are held together by foam to keep the elements in place during transport.
Charger
Parrot provides an international (interchangeable plugs, compatible with the main US/EU/UK standards), twin USB-PD charger. It allows you to charge both the ANAFI Ai and Skycontroller 4 batteries in less than 3 hours.


Appendix: certification documents
As of November 12th, 2021, the following certification documents, pertaining to ANAFI Ai, the Parrot Skycontroller 4 and their respective components, are available online.
ANAFI Ai and components:
– ANAFI Ai European Union Declaration of Conformity
– ANAFI Ai Declaration of FCC & IC Certification Tests:
o Digital Transmission System
o Unlicensed National Information Infrastructure
o PCS Licensed Transmitter
– ANAFI Ai charger EMC (European Union) Certificate of Conformity
– ANAFI Ai charger EMC (FCC, USA) Certificate of Conformity
– ANAFI Ai battery European Union Declaration of Conformity
– ANAFI Ai battery UL Report (Safety)
– ANAFI Ai battery CB Test Certificate (Safety Certificate, USA)
– ANAFI Ai Machinery Directive Conformity Certificate (European Union)
– ANAFI Ai IP5X Product Validation and Robustness Test Report (dust protection)
– ANAFI Ai IPX3 Product Validation and Robustness Test Report (water protection)
– ANAFI Ai Acoustic Level (81 dBA)
Skycontroller 4 and components:
– Skycontroller 4 European Union Declaration of Conformity
– Skycontroller 4 FCC Radio Certification Tests:
o Digital Transmission System
o Unlicensed National Information Infrastructure
– Skycontroller 4 USB Certification
– Skycontroller 4 HDMI Certification
– Skycontroller 4 EMC (Charging: European Union) Certificate of Conformity
– Skycontroller 4 EMC (Charging: FCC, USA) Certificate of Conformity
– Skycontroller 4 battery European Union Declaration of Conformity
– Skycontroller 4 battery UL Report (Safety)


PARROT DRONES SAS
RCS PARIS 808 408 074 – 174 quai de Jemmapes, 75010 PARIS, France
Parrot and the Parrot logo are trademarks of Parrot SA. ANAFI and Parrot Skycontroller are trademarks of Parrot Drones SAS.
