The SeeWare Platform Edge-to-Cloud Architecture
With SeeWare you can build distributed machine learning applications that run across multiple endpoint devices, with results aggregated in the cloud for further processing.

Running Distributed AI Edge to Cloud
SeeWare lets you put machine learning where it matters most, and where it can run most efficiently. For detection and recognition tasks, there are key advantages to running machine learning models on suitably equipped edge gateways.
Lower latency
Running machine learning algorithms at the edge avoids a round trip to the cloud. Video no longer needs to be uploaded for processing, with results sent back to the edge only if required. Running the models locally significantly improves overall responsiveness by reducing system latency, allowing real-time feedback directly to the user.
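As an illustration, the sketch below shows roughly what an edge-side inference loop looks like: every frame is analysed on the gateway itself, so results are available immediately rather than after a network round trip. The camera index and the detect function are placeholders, not SeeWare APIs.

# Illustrative edge-side inference loop: frames are captured and analysed
# locally, so results are available without a cloud round trip.
# detect() is a placeholder for a locally running model, not a SeeWare API.
import time

import cv2  # OpenCV, for local camera capture


def detect(frame):
    """Placeholder for a detection model running on the gateway."""
    return []  # e.g. a list of (label, confidence, bounding_box) tuples


def main():
    capture = cv2.VideoCapture(0)  # local camera
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            start = time.perf_counter()
            detections = detect(frame)  # inference happens on the gateway
            latency_ms = (time.perf_counter() - start) * 1000
            # Feedback goes straight to the user; nothing is uploaded and
            # nothing waits on a remote service.
            print(f"{len(detections)} detections in {latency_ms:.1f} ms")
    finally:
        capture.release()


if __name__ == "__main__":
    main()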
Lower cloud & bandwidth costs
Bandwidth costs for streaming media to the cloud can mount up; processing the media locally and sending only detection metadata to the cloud can yield significant savings. The same applies to cloud compute costs, which can be considerable, particularly for machine learning workloads. Our technology allows sophisticated machine learning models to run on relatively modest hardware, removing the need for expensive GPUs in the cloud.
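To put rough numbers on the bandwidth saving, the sketch below compares the size of a typical compressed video frame with a detection-metadata payload. The endpoint, field names and frame size are illustrative assumptions, not part of SeeWare.

# Illustrative back-of-envelope comparison: only compact detection
# metadata leaves the gateway, never the frames themselves. The payload
# shape and ingest endpoint below are assumptions, not SeeWare APIs.
import json
import urllib.request

FRAME_SIZE_BYTES = 250_000  # roughly, for one compressed 1080p frame

metadata = {
    "camera_id": "gate-01",                 # hypothetical identifier
    "timestamp": "2024-01-01T12:00:00Z",
    "detections": [
        {"label": "person", "score": 0.92, "box": [120, 80, 60, 140]},
    ],
}
payload = json.dumps(metadata).encode("utf-8")

print(f"metadata: {len(payload)} B vs frame: {FRAME_SIZE_BYTES} B "
      f"({FRAME_SIZE_BYTES / len(payload):.0f}x smaller)")

# Only the compact metadata is ever sent upstream:
req = urllib.request.Request(
    "https://cloud.example.com/ingest",     # hypothetical endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment with a real endpoint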
Better overall privacy
The privacy and security of personally identifiable data such as video is a major concern for vision-based systems. Keeping all media local to the camera helps address privacy and other data protection issues. Privacy was considered from the start of SeeWare's design and remains fundamental to it; key to that is removing the need to transmit or store any video data.
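To make that concrete, one possible shape for the events a gateway emits is sketched below; the field names are illustrative, not a SeeWare schema. The important property is that the record carries labels, scores, timings and coordinates only, with no field for pixel data at all.

# One possible shape for gateway-emitted events: metadata only, no pixels,
# so no video ever leaves the device. Field names are illustrative,
# not a SeeWare schema.
from dataclasses import dataclass


@dataclass
class DetectionEvent:
    camera_id: str
    timestamp: str                  # ISO 8601
    label: str                      # e.g. "person", "vehicle"
    confidence: float               # 0.0 .. 1.0
    box: tuple[int, int, int, int]  # x, y, width, height in pixels
    # Deliberately no image or frame field: the media stays on the device.


event = DetectionEvent(
    camera_id="gate-01",
    timestamp="2024-01-01T12:00:00Z",
    label="person",
    confidence=0.92,
    box=(120, 80, 60, 140),
)
print(event)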