
Chihuahua-VR: Media Processing Capacity

Scalability

Chihuahua-VR should by definition be a small, portable and cheap device. However, in some scenarios it makes sense to leave room for higher media processing capacity. There are basically two options: vertical or horizontal scalability.

The first one is extremely limited due to the mobile/portable design of the TX1. On the hardware level we could add an additional GPU, which would also consume a lot of CPU, or we can optimise the software stack. There is not much more we can achieve with a single-TX1 approach.

Horizontal scalability brings much more computing power to the box. It increases the size, complexity and price, but in some scenarios it simply makes sense! There are basically two problems to solve:

  • media exchange bus – parallel or serial data exchange between master and slave members in the cluster
  • signaling layer – low-level synchronisation mechanism with an API for the higher-level media pipeline application and plugins (a sketch follows this list)
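
As a rough illustration of the signaling layer, the sketch below shows a master broadcasting pipeline control messages to the slaves over a plain JSON-over-TCP channel. The wire format, the port number and the class/method names are all hypothetical; they only hint at the kind of API a higher-level media pipeline application and its plugins could call.

```python
# Minimal sketch of a signaling layer: the master broadcasts pipeline
# control messages (start/stop/configure) to the slave TX1 boards over a
# JSON-over-TCP channel. All names and the wire format are hypothetical.
import json
import socket

SIGNALING_PORT = 9000  # assumed control port, not defined by the project


class SignalingMaster:
    def __init__(self, slave_addresses):
        # Keep one persistent control connection per slave in the cluster.
        self.slaves = []
        for addr in slave_addresses:
            conn = socket.create_connection((addr, SIGNALING_PORT))
            self.slaves.append(conn)

    def broadcast(self, command, **params):
        # Serialise one control message and push it to every slave.
        message = json.dumps({"cmd": command, "params": params}).encode() + b"\n"
        for conn in self.slaves:
            conn.sendall(message)

    def start_pipeline(self, width, height, fps):
        # Example of a higher-level API call used by the media application.
        self.broadcast("start", width=width, height=height, fps=fps)

    def stop_pipeline(self):
        self.broadcast("stop")


# Usage (hypothetical slave addresses on the internal cluster network):
# master = SignalingMaster(["10.0.0.2", "10.0.0.3"])
# master.start_pipeline(width=1920, height=1080, fps=30)
```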

Conceptual scaling approach

Horizontal TX1/Chihuahua-VR scalability

The graph shows a Master-Slave architecture with a media layer and a signaling layer. In theory the media could be broadcast to the slave cluster in parallel, but in practice it must be distributed serially (each TX1 duplicates the video frames and pushes them to the next member). In my opinion this is cheaper in terms of computation and distribution, and it keeps the Master's CPU free of internal media streaming load.
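
Here is a minimal sketch of that serial (daisy-chain) distribution: each member accepts raw frames from the previous node, forwards the same bytes to the next node and only then runs its local processing. The bare-TCP transport, the frame format and the port are assumptions for illustration; a real TX1 build would more likely use an RTP/GStreamer transport.

```python
# Sketch of the serial frame relay: every cluster member listens for the
# upstream push, duplicates each frame to the next member, then processes
# its own copy. All constants are assumptions for illustration.
import socket

FRAME_SIZE = 1920 * 1080 * 3 // 2  # assumed 1080p NV12 frame
MEDIA_PORT = 9100                  # assumed internal media bus port


def recv_frame(conn):
    # Read exactly one frame worth of bytes from the upstream node.
    buf = bytearray()
    while len(buf) < FRAME_SIZE:
        chunk = conn.recv(FRAME_SIZE - len(buf))
        if not chunk:
            raise ConnectionError("upstream node closed the media bus")
        buf.extend(chunk)
    return bytes(buf)


def relay_loop(next_host, process_frame):
    # Accept the stream pushed by the previous member in the chain.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", MEDIA_PORT))
    server.listen(1)
    upstream, _ = server.accept()

    # The last member in the chain has no downstream node.
    # (Assumes the next member is already listening when we connect.)
    downstream = socket.create_connection((next_host, MEDIA_PORT)) if next_host else None

    while True:
        frame = recv_frame(upstream)
        if downstream:
            downstream.sendall(frame)  # duplicate the frame down the chain
        process_frame(frame)           # local encoding / CV work on this TX1
```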

The slave members have the same capacity as the master for media processing, encoding and distribution over the network. The cluster would also support clustered WebRTC bridging for streaming.
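
To illustrate what a single slave could run for its encode-and-stream leg, the sketch below launches a GStreamer pipeline that uses the TX1 hardware H.264 encoder and pushes RTP over UDP towards an assumed clustered WebRTC bridge. The choice of GStreamer, the omxh264enc element, the camera device and the bridge address/port are assumptions, not something the design mandates.

```python
# Sketch of a slave's encode-and-stream path, assuming GStreamer as the
# media stack on the TX1: capture, encode with the hardware H.264 encoder
# (omxh264enc) and send RTP/UDP to a clustered WebRTC bridge.
import shlex
import subprocess

BRIDGE_HOST = "10.0.0.10"  # hypothetical clustered WebRTC bridge
BRIDGE_PORT = 5004


def start_slave_stream(device="/dev/video0"):
    pipeline = (
        f"gst-launch-1.0 v4l2src device={device} ! "
        "video/x-raw,width=1920,height=1080,framerate=30/1 ! "
        "videoconvert ! omxh264enc ! rtph264pay config-interval=1 pt=96 ! "
        f"udpsink host={BRIDGE_HOST} port={BRIDGE_PORT}"
    )
    # Run the pipeline as a child process; a production build would use
    # the GStreamer Python bindings and hook into the signaling layer.
    return subprocess.Popen(shlex.split(pipeline))
```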

Conclusion

I would consider scalability out of scope for the first version of the device. However, for more advanced Computer Vision transformations and VR creation it might be necessary. It also makes sense to distribute deep learning, recognition and similar (non-realtime) processes over the cluster, as in the sketch below.
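
As a hypothetical example, such non-realtime jobs could be handed out round-robin over a simple job channel; the port, the JSON job descriptor and the helper function are illustrative only.

```python
# Hypothetical round-robin dispatch of non-realtime jobs (e.g. recognition
# on recorded frames) to the slave TX1 boards. Port and message format
# are assumptions, not part of the Chihuahua-VR design.
import itertools
import json
import socket

JOB_PORT = 9200  # assumed port for the non-realtime job channel


def dispatch_jobs(slave_hosts, jobs):
    # Open one connection per slave and hand out job descriptors in turn.
    conns = [socket.create_connection((host, JOB_PORT)) for host in slave_hosts]
    rotation = itertools.cycle(conns)
    for job in jobs:
        conn = next(rotation)
        conn.sendall(json.dumps(job).encode() + b"\n")


# Usage (hypothetical):
# dispatch_jobs(["10.0.0.2", "10.0.0.3"],
#               [{"task": "detect", "frame": "f001.raw"}])
```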