Both Google and NVIDIA have released development boards targeted at edge AI to attract developers, makers, and hobbyists. But which one do you prefer?

The Google Coral Dev Board uses the best of Google's machine learning tools to make AI more accessible. The board boasts a removable system-on-module (SOM) featuring the Edge TPU and looks a lot like a Raspberry Pi.

Coral Dev Board: https://store.gravitylink.com/global
Coral USB Accelerator: https://store.gravitylink.com/global

The Jetson Nano is a new development board from NVIDIA targeted at AI and machine learning. It comes with a GPU with 128 CUDA cores and a bunch of software and examples pre-installed to get you started.

Jetson Nano

The Raspberry Pi 4 is the latest product in the popular Raspberry Pi range of computers. It comes with up to 4GB of RAM (four times that of any previous Pi), a faster CPU and GPU, faster Ethernet, dual-band Wi-Fi, twice as many HDMI outputs, and two USB 3 ports.

Raspberry Pi 4

So, which one do you prefer?

We collected opinions from the community; here's what they had to say.

Google Coral is limited to TensorFlow Lite, IIRC, while the Jetson supports PyTorch as well. To me that makes the Jetson the preferred option, as I'm more familiar with PyTorch. However, quantization and pruning support is way better in TensorFlow (for now).

The Raspberry Pi 4 is really not comparable with the other two, as it does not have a GPU or TPU. —— Mxbonn
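For context on the quantization point above: TF Lite (and hence the Edge TPU) relies on 8-bit affine quantization, where float weights are mapped to int8 via a scale and zero-point. Here is a framework-free sketch of that scheme; the weight values are made-up illustration numbers, and real TF Lite chooses per-tensor or per-axis scales during conversion or calibration.

```python
def quantize(values, num_bits=8):
    """Map a list of floats to signed int8 using an affine scale/zero-point."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # the range must include 0.0
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.71, -0.05, 0.0, 0.32, 1.24]  # toy float "weights"
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Round-trip error is bounded by one quantization step (the scale).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale
```

The zero-point trick is why 0.0 is representable exactly, which matters for zero-padding in convolutions; it also illustrates the accuracy cost that makes framework-side quantization tooling (calibration, quantization-aware training) so important.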

I would say it depends on the application. What DL model? Inference only, or training too? How many inferences per second do you expect? My opinion:

Compute power/watt: Google coral > Jetson > Raspberry

Software ecosystem (i.e. framework/additional hardware support, etc.): Google coral < Jetson < Raspberry

—— yusuf-bengio

Here's my take:

Best flexibility: Jetson Nano

Upside: Good performance and runs anything you can run on your computer (that fits in 4GB RAM)

Downside: four-year-old SoC, decent all-round performance but not the most efficient

Bonus: Great software/library support, comes with heatsink etc.

Best perf/watt: Coral DevBoard (also most expensive)

Upside: newest chip, most efficient

Downside: Locked into TF Lite, you're at the mercy of what they support

Bonus: Probably the best choice (perf/watt) if you know what you're doing, sometimes faster than Jetson Nano

Cheapest: Raspberry Pi 4

Upside: Cheap, good tutorials for most stuff, can probably run any model on the CPU

Downside: CPU-only, my estimation (extrapolated from RPi 3) is you can get 3–5 FPS on normal TF + MobileNet. Definitely needs extra heatsink + fan for running sustained inference.

Bonus: It's a Raspberry Pi, I love these boards for some reason. You can pair it later with a USB accelerator if you wish. Also you can overclock it for lulz. —— tlkh

Lastly, I tried to answer the same question while trying to build a platform for RC cars. My conclusions (similar to the others') are:

If you want to run a model supported by the Coral (so CNNs, no RNNs), RPi 4 + Coral is the best option. It is super fast compared to the Jetson Nano.

But if you want to run an RNN model, you need the Jetson Nano, which has moderate speed.

The best would be if the Coral supported RNN models; that would be awesome. From my perspective, an autonomous RC car needs RNN models, so I decided to go with the Nano. —— melgor89

Have you considered an RPi 4 + Coral USB Accelerator? It may be a good combination. But to give my own opinion: Google Coral — I wouldn't suggest it; I would prefer an RPi 4 + USB Accelerator because of the software ecosystem. The disadvantage is that it supports TensorFlow Lite only. RPi 4 by itself — not powerful enough. Jetson — a good compromise: it supports more libraries and is more powerful than the RPi 4 by itself. —— vladfedchenko

My bet for on-edge inference is on TPU (Coral) or CPU-only (Raspberry Pi) solutions.

For the Coral: the TPU provides very attractive performance per watt. For many lightweight inference tasks (e.g. face detection, segmentation, object detection), the Coral would be the best solution. Google provides support for both modeling (using TF Lite) and inference (using MediaPipe). You can train, optimize, and deploy an entire system in a very short period of time and expect production quality.

At the other end of the spectrum is CPU-only inference, which I'm very excited about. For CPU, you need highly quantized, specialized models. Several startups (e.g. xnor.ai) are working on this, but they want to sell you the model, not the hardware. If the software toolkit becomes commoditized, these solutions will become very popular.

The biggest problem with the Jetson is that it is not developed end-to-end by the same company. Facebook does not care about embedded systems, nor do they want to help NVIDIA sell GPUs. So the Jetson is always a second-class citizen in the PyTorch community. On top of that, using GPUs in embedded systems, unless you have a very specialized use case, is a bad choice. —— adelope