
Four computing modes of the Internet of Things

2026-04-06 05:56:38

From the perspective of IoT practitioners, there is a recurring need for more available, more distributed computing. When integrating IoT with OT and IT systems, the first challenge is the massive amount of data sent from devices to servers. In a factory automation scenario, there might be hundreds of integrated sensors, each sending three data points per second, and most of this sensor data becomes completely useless after about five seconds. Hundreds of sensors, multiple gateways, multiple processes, and multiple systems need to handle this data almost instantaneously.
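The five-second staleness constraint above can be made concrete with a small sketch. This is a hypothetical filter (the reading schema and the `fresh_readings` helper are illustrative, not from any real IoT framework) that drops sensor readings once they are older than the useful window:

```python
import time

STALE_AFTER_S = 5  # per the scenario above, readings lose value after ~5 seconds

def fresh_readings(readings, now=None):
    """Keep only readings younger than STALE_AFTER_S.

    Each reading is a (timestamp, sensor_id, value) tuple with epoch-second
    timestamps. The schema is hypothetical, for illustration only.
    """
    now = time.time() if now is None else now
    return [r for r in readings if now - r[0] < STALE_AFTER_S]

# Example: one reading 2 s old (kept) and one 10 s old (dropped).
now = 1_000_000.0
batch = [(now - 2, "temp-01", 21.5), (now - 10, "temp-02", 22.1)]
print(fresh_readings(batch, now=now))  # -> [(999998.0, 'temp-01', 21.5)]
```

A filter like this is exactly the kind of work the later sections argue should run close to the sensors, since shipping already-stale data upstream wastes bandwidth and storage.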

Most proponents of data processing favor the cloud model, which holds that everything should eventually be sent to the cloud. This was also the foundation of the first generation of Internet of Things (IoT) computing.

1. Cloud computing for the Internet of Things

With the IoT-plus-cloud-computing model, you essentially ship your sensor data to the cloud and process it there. An ingestion module receives the data and stores it in a data lake (very large storage); the data is then processed in parallel (with Spark, Azure HDInsight, Hive, or similar), and decisions are made from the resulting fast-moving information.
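The ingest → data lake → batch processing → decision pipeline described above can be sketched in miniature. This is an in-memory stand-in only: `DataLake`, `batch_process`, and `decide` are illustrative names, not a real cloud API, and a real deployment would use managed services like those listed below.

```python
# Minimal in-memory sketch of the cloud model:
# ingest -> data lake -> parallel/batch processing -> decision.

class DataLake:
    """Stand-in for cloud data-lake storage (e.g. an object store)."""
    def __init__(self):
        self.records = []

    def ingest(self, record):
        self.records.append(record)

def batch_process(lake):
    """Stand-in for a Spark/HDInsight/Hive job: average per sensor."""
    totals = {}
    for sensor_id, value in lake.records:
        acc = totals.setdefault(sensor_id, [0.0, 0])
        acc[0] += value
        acc[1] += 1
    return {sid: s / n for sid, (s, n) in totals.items()}

def decide(averages, threshold=80.0):
    """Decision step: flag sensors whose average exceeds a threshold."""
    return sorted(sid for sid, avg in averages.items() if avg > threshold)

lake = DataLake()
for record in [("temp-01", 79.0), ("temp-01", 85.0), ("temp-02", 60.0)]:
    lake.ingest(record)
print(decide(batch_process(lake)))  # -> ['temp-01']
```

Note the round trip: every reading travels to central storage before any decision is made, which is precisely the latency cost the fog and edge models below try to avoid.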

Since the early days of building IoT solutions, many products and services have appeared that make this much easier:

You can use AWS services such as Kinesis and Lambda for big data.

Leveraging the Azure ecosystem makes building big data capabilities extremely easy.

Alternatively, you can use Google Cloud products such as Cloud IoT Core.

Some of the challenges the cloud model faces in the Internet of Things are:

Users and businesses are uncomfortable with their data being held on private platforms owned by companies like Google, Microsoft, and Amazon.

Latency and network outage issues

Increased storage costs, plus data security and durability concerns.

Typical big data frameworks struggle to provide an ingestion module large enough to keep up with the data rate.

2. Fog computing for the Internet of Things

Fog computing brings the processing closer. Instead of sending data all the way to the cloud and waiting for servers to process it and respond, fog computing uses local processing units or computers.
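The fog idea above can be sketched as follows: a local processing unit on the LAN reduces a window of raw readings to a compact summary and forwards only that upstream, instead of shipping every reading to the cloud. The `fog_aggregate` function and the summary shape are illustrative assumptions, not part of any fog framework.

```python
# Sketch of a fog node: summarize raw sensor data locally and forward
# only one compact record per sensor to the cloud.

def fog_aggregate(readings):
    """Reduce a window of (sensor_id, value) readings to per-sensor
    min/max/mean summaries."""
    stats = {}
    for sensor_id, value in readings:
        lo, hi, total, n = stats.get(sensor_id, (value, value, 0.0, 0))
        stats[sensor_id] = (min(lo, value), max(hi, value), total + value, n + 1)
    return {sid: {"min": lo, "max": hi, "mean": total / n}
            for sid, (lo, hi, total, n) in stats.items()}

# Three raw vibration readings collapse into one summary record.
window = [("vib-01", 2), ("vib-01", 9), ("vib-01", 4)]
print(fog_aggregate(window))  # -> {'vib-01': {'min': 2, 'max': 9, 'mean': 5.0}}
```

The payload sent upstream shrinks from one message per reading to one per sensor per window, which is the bandwidth argument for fog computing in a nutshell.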

Four to five years ago, wireless solutions like Sigfox and LoRaWAN didn't exist, and BLE lacked mesh or long-range capabilities. More expensive network solutions therefore had to be used to ensure a secure, persistent connection to the data processing unit. This central unit was the heart of the solution, and there were few dedicated solution providers for it.

Implementing a fog network teaches you that:

It isn't simple; it requires knowing and understanding many things. Building software for the Internet of Things is otherwise fairly direct and open, and when the network becomes a barrier, it slows everything down.

Such an implementation requires a very large team and multiple vendors. Vendor lock-in is also a common challenge.

OpenFog is an open fog computing framework developed by renowned industry professionals and designed specifically for fog computing architectures. It provides use cases, testbeds, technical specifications, and a reference architecture.

3. Edge computing for the Internet of Things

The Internet of Things is about capturing minute interactions and reacting as quickly as possible. Edge computing, being closest to the data source, makes it possible to apply machine learning right at the sensor. If you get bogged down in the edge-versus-fog debate, remember: edge computing is about intelligence in the sensor nodes themselves, while fog computing is about the local area network, providing computing power for operations on large volumes of data.
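The edge-versus-fog split above can be sketched as two layers with different jobs: the edge node reacts per reading, on the sensor itself, with no network round trip, while the fog layer does heavier work over LAN-level batches. Both functions are illustrative stand-ins, not a real framework API.

```python
# Two layers, two jobs:
#  - edge_react runs on the sensor node: instant local decision per reading
#  - fog_process runs on a LAN gateway: heavier work over many readings

def edge_react(value, limit=100.0):
    """Edge: react immediately to a single reading, no network involved."""
    return "shutdown" if value > limit else "ok"

def fog_process(batch):
    """Fog: batch-level computation over readings collected on the LAN."""
    return sum(batch) / len(batch)

print(edge_react(120.0))           # -> shutdown
print(fog_process([90.0, 110.0]))  # -> 100.0
```

The design point is latency: the edge decision is available in microseconds on the node, while the fog answer requires the batch to be collected first, and the cloud answer requires a WAN round trip on top of that.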

Industry giants like Microsoft and Amazon have launched Azure IoT Edge and AWS Greengrass to bring machine intelligence to IoT gateways and sensor nodes with robust computing capabilities. While these are excellent solutions that simplify many tasks, they significantly change the meaning of edge computing as understood and used by practitioners.

Edge computing should not require machine learning algorithms to run on gateways to build intelligence. In 2015, Alex discussed embedded artificial intelligence on neural memory processors at the ECI conference:

True edge computing will happen on neural devices that ship pre-loaded with a machine learning algorithm, serving a single purpose and responsibility. Wouldn't that be amazing? Imagine an end node that can perform native NLP on the handful of key phrases that form a password, like "open sesame"!

These edge devices typically have a neural-network-like structure, so loading a machine learning algorithm essentially burns a neural network into the device. This burning, however, is permanent and irreversible.
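The "open sesame" example above describes a single-purpose node with one fixed, burned-in classifier. A real device would carry a small pre-loaded neural network; this runnable stand-in hard-codes the decision rule instead, purely to illustrate the fixed-function, one-responsibility idea (the `spot_keyphrase` helper is hypothetical).

```python
# Single-purpose edge node: detects exactly one burned-in passphrase.
# A real neural device would replace this string match with a tiny
# pre-trained model; the fixed-function behavior is the same.

PASSPHRASE = "open sesame"  # the one phrase this node exists to detect

def spot_keyphrase(transcript):
    """Fixed-function 'model': fires only on the burned-in passphrase."""
    return PASSPHRASE in transcript.lower()

print(spot_keyphrase("Open Sesame, please"))  # -> True
print(spot_keyphrase("close the door"))       # -> False
```

Because the function is the device's only job, there is nothing to reconfigure at runtime, which mirrors the permanent, irreversible "burning" described above.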

There is a completely new space for embedded devices that can facilitate embedded edge intelligence on low-power sensor nodes.

4. Mist computing for the Internet of Things

So far, three models for data processing and intelligence in the IoT have been covered:

The cloud computing model

The fog computing model

The edge computing model

Mist computing complements fog and edge computing and makes them better, without waiting years for new hardware. It simply uses the networking capabilities of the IoT devices themselves, distributing workloads across them without the heavyweight intelligence models offered by either fog or edge computing.

Establishing this model can enable high-speed data processing and intelligence extraction even on devices with 256 KB of memory and a data rate of roughly 100 kbit/s. With mesh networks, promoters of such a computing model will certainly emerge, and someone will propose an even better, easier-to-use model built on the mist approach.
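The workload distribution described above can be sketched with a trivial scheduler: small work items are spread across the mesh of constrained devices themselves rather than handed to a gateway or the cloud. The round-robin policy, node names, and `distribute` helper are all illustrative assumptions; a real mist deployment would need to account for each node's memory and link budget.

```python
# Sketch of mist-style work distribution: spread small tasks across
# the mesh nodes themselves (256 KB-class devices), round-robin.

def distribute(tasks, nodes):
    """Assign tasks to mesh nodes round-robin; returns node -> task list."""
    assignment = {node: [] for node in nodes}
    for i, task in enumerate(tasks):
        assignment[nodes[i % len(nodes)]].append(task)
    return assignment

mesh = ["node-a", "node-b", "node-c"]
print(distribute(["t1", "t2", "t3", "t4"], mesh))
# -> {'node-a': ['t1', 't4'], 'node-b': ['t2'], 'node-c': ['t3']}
```

Even this naive policy shows the appeal: no single node holds the whole workload, so the model fits devices far too small to run a fog gateway's software stack.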
