Developers of the Internet of Things always seem to be moving into the last big thing, at least as far as communications expectations and protocols go. Too often security is an afterthought, something to be bolted on later.
I often have to design secure communications for new deployments on a university campus. Many new roll-outs still use RESTful JSON. Remote systems often transfer telemetry to the cloud using unencrypted FTP. OpenADR generally uses reverse polling because corporate security won't let external systems initiate connections to on-premises systems secured with last-generation security.
BACnet is moving closer to modern expectations with BACnet/SC. Control nodes and sensors communicate using TLS-secured messages. Devices within the internal network can work with certificates issued by the BACnet/SC hub. Legacy systems can hide behind a hub and act as if they were secured.
Even so, older protocols and expectations linger. Traffic from BACnet router to BACnet application is still limited to WebSockets. ASHRAE specifies TLS 1.2 when many enterprises have moved on to TLS 1.3. It is difficult to match the nimbleness of modern IT systems when putting in place systems that will not be replaced or re-programmed for a couple of decades.
(Let me be clear here: my biggest complaint about BACnet/SC is that I cannot yet deploy it. It is far more secure, and far better architected, than what came before.)
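To make the transport concrete, here is a rough Go sketch of what dialing a BACnet/SC hub looks like at the WebSocket-plus-TLS layer, using the gorilla/websocket package. The hub address, port, and certificate file names are placeholders, and the BVLC/SC framing that rides on top is omitted.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"

	"github.com/gorilla/websocket"
)

func main() {
	// Device certificate and key issued by the BACnet/SC hub's CA
	// (file names are placeholders for this sketch).
	cert, err := tls.LoadX509KeyPair("device.crt", "device.key")
	if err != nil {
		log.Fatal(err)
	}

	// Trust only the hub's issuing CA, not the system pool.
	caPEM, err := os.ReadFile("hub-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	dialer := websocket.Dialer{
		TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
			// The spec calls for TLS 1.2; setting a floor rather
			// than a ceiling still allows 1.3 where the hub has it.
			MinVersion: tls.VersionTLS12,
		},
	}

	// Hub URL is hypothetical; BACnet/SC runs over secure WebSockets.
	conn, _, err := dialer.Dial("wss://bacnet-hub.example.edu:47808", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// From here, BVLC/SC messages are exchanged as binary frames.
	log.Println("connected:", conn.RemoteAddr())
}
```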
Newer IT systems are expected to continuously tune themselves based upon actual observed performance within their own environment. Applications that cannot do this on their own will end up shipping their data to cloud AI, with a resulting loss of performance, privacy, and security. We should all know by now that data that goes to the cloud tends to break free in the cloud, offering the hacker or commercial competitor information for a decade. Once released, privacy never comes back.
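What does on-device tuning look like in practice? A minimal sketch in Go, with all names invented for illustration: an exponentially weighted moving average of observed latency drives the device's own polling interval, and nothing ever leaves the box.

```go
package main

import "time"

// localTuner keeps a running estimate of observed latency and adjusts
// its own polling interval. All state stays on the device.
type localTuner struct {
	avgLatencyMs float64       // EWMA of observed latency, in milliseconds
	alpha        float64       // smoothing factor, 0 < alpha <= 1
	interval     time.Duration // current polling interval
}

// observe folds one measured latency into the running average, then
// nudges the polling interval toward an illustrative 20x-latency
// target, clamped to a sane range.
func (t *localTuner) observe(latency time.Duration) {
	ms := float64(latency.Milliseconds())
	t.avgLatencyMs = t.alpha*ms + (1-t.alpha)*t.avgLatencyMs

	next := time.Duration(t.avgLatencyMs*20) * time.Millisecond
	switch {
	case next < time.Second:
		next = time.Second
	case next > time.Minute:
		next = time.Minute
	}
	t.interval = next
}
```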
Some IoT platform models have moved toward Docker. Docker wraps code in a minimal Linux-like operating system (OS) image so it can be deployed anywhere. I'm afraid that mainline IoT will get to Docker just as the cloud moves on to the next thing. On the edge, within the devices themselves, developers may wish to run multiple environments: one for control, one for the user interface, one for AI. A Docker container supporting Python for AI may require a lot of resources. Docker is, and will remain, too fat and resource-demanding to support such applications on the edge.
I recently have seen some movement past Docker to DAPR (the Distributed Application Runtime). One can think of DAPR as a much lighter-weight Docker. Different DAPR nodes are optimized for different languages. For example, there is a DAPR node pre-adapted to run the Go language (Golang, or simply Go). Go is ideally suited to developing tiny replacements for Python AI routines. A Go DAPR node can be much smaller and more efficient than a Python routine in a Docker container. Three DAPR nodes, one for control, one for AI based on Go, and one for UI based on .NET Core, can fit on a thermostat or other small system.
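As a sketch of what such a node might look like, here is a minimal AI node built with the Dapr Go SDK (github.com/dapr/go-sdk): it registers a single service-invocation handler and is reachable from the other nodes by app ID. The method name, port, and scoring logic are all assumptions for the example.

```go
package main

import (
	"context"
	"log"

	"github.com/dapr/go-sdk/service/common"
	daprd "github.com/dapr/go-sdk/service/http"
)

// infer is a stand-in for a tiny on-device model; real scoring
// logic would replace the echo below.
func infer(ctx context.Context, in *common.InvocationEvent) (*common.Content, error) {
	return &common.Content{
		Data:        in.Data, // echo the payload back for the sketch
		ContentType: "application/json",
	}, nil
}

func main() {
	// The Dapr sidecar routes invocation calls for this app ID
	// to the port below.
	s := daprd.NewService(":6002")
	if err := s.AddServiceInvocationHandler("infer", infer); err != nil {
		log.Fatalf("register handler: %v", err)
	}
	if err := s.Start(); err != nil {
		log.Fatalf("start service: %v", err)
	}
}
```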
Upgrading some part of such a system, say the AI, could be as simple as swapping out that single DAPR node without touching the rest.
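The caller's side shows why that swap is cheap. The control node addresses the AI node only by app ID and method name, so replacing whatever sits behind `infer` never touches this code. The app ID `ai` and the payload are assumptions, continuing the sketch above.

```go
package main

import (
	"context"
	"log"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
	// Connect to the local Dapr sidecar.
	c, err := dapr.NewClient()
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	content := &dapr.DataContent{
		Data:        []byte(`{"zone_temp_c": 21.4}`),
		ContentType: "application/json",
	}

	// "ai" is the app ID of the AI node. Upgrade that node freely;
	// this call does not change.
	out, err := c.InvokeMethodWithContent(context.Background(), "ai", "infer", "post", content)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("inference result: %s", out)
}
```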
Don’t be slow to the last big thing. I recommend that smart building developers and smart energy developers consider what they might do with DAPR today.