IoT connectivity solutions: Media access control layer and network topology

Media access control layer and network topology

For IoT applications, the main characteristics of the media access control (MAC) layer that need to be considered are multiple access, synchronization, and network topology.

Multiple Access. Looking back at decades of successful cellular system deployment, one can safely conclude that TDMA is a good fit for the IoT. TDMA is suited for low-power operation with a decent number of devices, as it allows for optimal scheduling of inactive periods. Hence, TDMA is selected for multiple access in the MAC layer.
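
As a rough illustration (with made-up slot and frame numbers, not taken from any particular standard), the short Python sketch below shows how owning a single TDMA slot translates directly into a low radio duty cycle:

    # Hypothetical TDMA frame: illustrative numbers only, not from any specific standard.
    SLOT_MS = 10           # duration of one time slot
    SLOTS_PER_FRAME = 100  # number of slots in a frame (one per active device)

    def duty_cycle(slots_owned: int, slots_per_frame: int = SLOTS_PER_FRAME) -> float:
        """Fraction of time the radio must be on if the device only wakes
        for its own slot(s) and sleeps for the rest of the frame."""
        return slots_owned / slots_per_frame

    # A sensor that owns a single slot is awake 1% of the time, so its radio
    # energy roughly scales with 0.01 x (active radio power).
    print(f"duty cycle: {duty_cycle(1):.1%}")   # -> 1.0%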

Synchronization. In IoT applications, there are potentially a very large number of power-sensitive devices with moderate throughput requirements. In such a configuration, it is essential to maintain a reasonably consistent time base across the entire network and potentially across different networks. Given that throughput is not the most critical requirement, it is suitable to follow a beacon-enabled approach, with a flexible beacon period to accommodate different types of services.
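
To give a feel for what a shared time base costs, here is a small sketch estimating the guard time a device needs so as not to miss a beacon, assuming ±20 ppm crystals on both ends and a hypothetical 1-second beacon period:

    # Worst-case clock drift between two beacons, assuming +/-20 ppm crystals
    # on both the coordinator and the device (40 ppm relative drift).
    PPM = 1e-6
    relative_drift = 40 * PPM       # assumption: 20 ppm at each end
    beacon_period_s = 1.0           # hypothetical beacon period

    guard_time_s = relative_drift * beacon_period_s
    print(f"guard time: {guard_time_s * 1e6:.0f} us")  # -> 40 us per 1 s beacon period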

Network topology. Mobile networks using a cellular topology have been efficiently serving a large number of devices with a high level of security and reliability, e.g., 5,000+ devices per base station for LTE in urban areas. This topology is based on a star within each cell, while the cells are connected in a hierarchical tree in the network backhaul. This approach is regarded as suitable for the IoT and is therefore selected.

The network layer and interface to applications

The network layer (NWK) and the interface to applications are less fundamental as far as power-efficiency and reliability are concerned, and there is more variation across IoT applications. Nevertheless, it is widely acknowledged that IoT applications need to support the Internet Protocol (IP), whether IPv4 or IPv6. In addition, the User Datagram Protocol (UDP) and the Constrained Application Protocol (CoAP) could provide the relevant trade-off between flexibility and implementation complexity on resource-constrained devices.
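
To illustrate why UDP plus CoAP is a good match for constrained devices, the sketch below hand-assembles a minimal CoAP GET request (a 4-byte fixed header plus a single Uri-Path option) and sends it over a plain UDP socket. The host address and resource name are placeholders only:

    import socket

    # Minimal CoAP GET, built by hand to show how small the header is.
    # Placeholders: replace HOST and PATH with a real CoAP endpoint.
    HOST, PORT, PATH = "192.0.2.1", 5683, b"sensor"

    version, msg_type, token_len = 1, 0, 0          # ver=1, CON message, no token
    header = bytes([
        (version << 6) | (msg_type << 4) | token_len,
        0x01,                                        # code 0.01 = GET
        0x00, 0x01,                                  # message ID
    ])
    # Single Uri-Path option: option number 11, short length encoding.
    uri_path_option = bytes([(11 << 4) | len(PATH)]) + PATH

    packet = header + uri_path_option                # 4 + 1 + 6 = 11 bytes total

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, (HOST, PORT))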

Furthermore, the IoT will represent an immense security challenge, and it is likely that state-of-the-art security features will become necessary. As of today, we can assume that 128-bit Advanced Encryption Standard (AES) for encryption, together with Diffie-Hellman (DH) or its Elliptic Curve variant (ECDH) for key exchange, will become the baseline for securing communication.
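
As a minimal sketch of that baseline, the following example performs an ECDH exchange and then protects a payload with AES-128. It uses the Python cryptography package; the curve, key-derivation parameters, and labels are illustrative assumptions rather than a defined IoT security profile:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    import os

    # Each side generates an ephemeral ECDH key pair (NIST P-256 chosen as an example).
    device_key = ec.generate_private_key(ec.SECP256R1())
    gateway_key = ec.generate_private_key(ec.SECP256R1())

    # Both sides compute the same shared secret from their own private key and the
    # peer's public key, then derive a 128-bit AES key from it.
    shared_secret = device_key.exchange(ec.ECDH(), gateway_key.public_key())
    aes_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                   info=b"iot-session").derive(shared_secret)

    # AES-128 in GCM mode provides confidentiality plus integrity for the payload.
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, b"temperature=21.5", None)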

4 Main ‘Must Haves’ for the Physical Layer of Internet of Things Wireless Connectivity

Analysis of the physical layer of wireless communication solutions for IoT applications.

For IoT applications, the main characteristics of the physical layer that need to be considered are modulation, data rate, transmission mode, and channel encoding.

Modulation. The nature of IoT applications, many of which involve infrequent data transmission and call for low-cost, low-complexity devices, precludes the use of high-order modulation or advanced channel coding such as trellis-coded modulation. Unless it is mandated by a harsh radio environment with narrowband interferers or by regulatory constraints, spread spectrum, e.g., Direct Sequence Spread Spectrum (DSSS), is best avoided, as it increases the channel bandwidth and requires a more costly and power-consuming RF front-end with no data-rate improvement. Allowing non-coherent demodulation relaxes the constraint on device complexity, so (Gaussian) Frequency Shift Keying ((G)FSK) is a proven and suitable choice, as it is in Bluetooth radios. Where bandwidth allows, the most sensible choice is Gaussian Minimum Shift Keying (GMSK), as its modulation index of 1/2 allows for lower complexity, or better sensitivity at a given complexity. When the available bandwidth is restricted, GFSK with a lower modulation index is still appropriate, the next best being 1/3, as it still allows for near-optimal demodulation at reasonable complexity.
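
For reference, the modulation index h ties the frequency deviation to the symbol rate as h = 2 x f_dev / symbol_rate. The small sketch below shows the deviations implied by the two indices mentioned, assuming the 500 kbps gross rate discussed in the next paragraphs:

    # Modulation index h relates peak deviation to the symbol rate:
    #   h = 2 * f_dev / symbol_rate
    # so for a given symbol rate the required deviation is:
    def deviation_hz(mod_index: float, symbol_rate_hz: float) -> float:
        return mod_index * symbol_rate_hz / 2

    RATE = 500e3  # assumed gross rate, see the data-rate discussion below
    for h in (0.5, 1 / 3):        # GMSK (h = 1/2) and the narrower GFSK option (h = 1/3)
        print(f"h = {h:.2f}: deviation = {deviation_hz(h, RATE) / 1e3:.1f} kHz")
    # -> h = 0.50: deviation = 125.0 kHz ; h = 0.33: deviation = 83.3 kHz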

Data rate. IoT applications need to mix very low data rate requirements, e.g., a sensor or an actuator with a limited data size on either uplink or downlink, with more demanding requirements, e.g., a 6-inch 3-color ePaper display in a home that updates the daily weather forecast or the shopping list, easily amounting to more than 196 kB of data. Yet even for small data amounts, a carefully chosen higher data rate actually improves power consumption thanks to a shorter transmission time and a reduced probability of collision. A similar reasoning was applied to Bluetooth Low Energy, a.k.a. BLE or Bluetooth Smart, formerly Nokia’s WiBree, which uses a 1 Mbps gross rate despite much lower actual throughput. BLE, however, is aimed at proximity communication, and its high gross data rate of 1 Mbps sacrifices range considerably. Even when operating at sub-GHz frequencies, which offer better range than 2.4 GHz for a given transmit power, 1 Mbps is considered to be the absolute upper limit. Above that, the increase in transceiver complexity and power does not improve the actual usable throughput, as packet acknowledgement overhead and packet processing time become the bottleneck.
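
A back-of-the-envelope sketch of this trade-off, using the 196 kB ePaper update from the example above and an assumed, purely illustrative radio current:

    # Transmission time and charge for the 196 kB ePaper update at different gross rates.
    PAYLOAD_BITS = 196 * 1024 * 8
    ACTIVE_CURRENT_MA = 15          # assumed radio current while transmitting (illustrative)

    for rate_kbps in (100, 500, 1000):
        time_s = PAYLOAD_BITS / (rate_kbps * 1e3)
        charge_mas = ACTIVE_CURRENT_MA * time_s      # milliamp-seconds spent on air
        print(f"{rate_kbps:4d} kbps: {time_s:6.1f} s on air, ~{charge_mas:5.0f} mA*s")
    # Ignores protocol overhead and retransmissions; the point is that on-air time,
    # and hence energy and collision probability, scales inversely with the data rate.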

On the lower end, data rates below 40 kbps are actually impractical, as they would rule out using standard off-the-shelf 20 parts-per-million (ppm) crystals. Indeed, the frequency accuracy of these crystals is not sufficient for the correspondingly narrow channels: 20 ppm translates into an 18 kHz frequency error when operating in the sub-GHz bands, and 48 kHz when operating at 2.4 GHz. A narrow channel requires an accurate reference such as a temperature-compensated crystal oscillator (TCXO) on both ends, including the client, which is more costly, power-consuming, and bulky [36]. The optimal baseline gross data rate is considered to be 500 kbps. Depending on the scale of the network, e.g., home, building, district, or city, the applications, and the number of devices, we expect different trade-offs, with actual deployments ranging from 100 kbps to 500 kbps.
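
The frequency-error figures quoted above follow directly from the crystal tolerance and the carrier frequency, as this short check shows:

    # Frequency error of a +/-20 ppm crystal at the two carrier frequencies of interest.
    PPM = 1e-6
    for carrier_hz in (900e6, 2.4e9):
        error_hz = 20 * PPM * carrier_hz
        print(f"{carrier_hz / 1e6:6.0f} MHz: +/- {error_hz / 1e3:.0f} kHz")
    # ->  900 MHz: +/- 18 kHz   (sub-GHz case)
    # -> 2400 MHz: +/- 48 kHz   (2.4 GHz case)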

Transmission mode. Full-duplex communication is challenging, as it requires good isolation and does not allow for resource sharing between transmit and receive; it also typically requires different frequencies for downlink and uplink. Since radio spectrum is a scarce resource, half-duplex is therefore selected, preferably on the same radio channel.

Channel coding. There is potential for improving link quality and performance with a limited complexity increase by using (adaptive) channel coding together with an Automatic Repeat reQuest (ARQ) retry mechanism. As of today, this is considered optional given the complexity-cost-performance trade-offs achieved with current technologies, but provisions have to be made for future implementation. For now, a flexible packet length is considered a sufficient means of adapting to link quality variations.
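
To make the packet-length argument concrete, the sketch below estimates the expected number of ARQ transmissions per packet for a few assumed bit error rates, with no forward error correction; this is exactly the effect a flexible packet length lets the link adapt to:

    # Expected ARQ transmissions per packet for a given payload length and raw BER,
    # assuming independent bit errors and no forward error correction.
    def expected_transmissions(payload_bytes: int, ber: float) -> float:
        p_success = (1.0 - ber) ** (payload_bytes * 8)
        return 1.0 / p_success

    for ber in (1e-5, 1e-4, 1e-3):                  # assumed link qualities
        for length in (32, 256):                    # short vs long packet
            n = expected_transmissions(length, ber)
            print(f"BER {ber:.0e}, {length:3d} B: {n:5.2f} transmissions on average")
    # On a clean link long packets are cheap; as the BER degrades, shortening the
    # packet keeps the retry count (and thus power) under control.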


Internet of Things wireless connectivity option analysis: Z-Wave Pros and Cons

As another asynchronous wireless networking protocol, Z-Wave is designed for home automation and remote control applications. Z-Wave originated from the Danish startup Zensys and was acquired by Sigma Designs in 2008. The Z-Wave Alliance was formed in 2005. Unlike most of the competing technologies discussed so far, Z-Wave operates in the sub-GHz bands: 868.42 MHz in Europe, 908.42 MHz in the US, 916 MHz in Israel, 919.82 MHz in Hong Kong, and 921.42 MHz in Australia and New Zealand. The use of sub-GHz bands brings improved range and reliability and reduces interference within the Z-Wave network. Nevertheless, there are a few issues worth mentioning when applying Z-Wave to the IoT.

Z-Wave offers limited data rates and mediocre spectrum efficiency due to its Manchester-coded GFSK modulation (Manchester coding dates back to 1948), which doubles the occupied spectrum for limited coding gain. Originally offering a low data rate of 9.6 kbps, Z-Wave has been upgraded to 100 kbps in its latest version. A Z-Wave network is limited to 232 nodes, yet manufacturers recommend no more than 30 to 50 nodes in practical deployments. Moreover, Z-Wave makes use of relays, such as wall-mounted light switches, to forward packets when devices are out of range.
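
For illustration, Manchester coding maps every data bit onto two channel symbols, which is why it doubles the occupied bandwidth for a given bit rate. A minimal sketch, using one common mapping convention:

    # Manchester coding: each data bit becomes a two-symbol transition,
    # so the symbol rate (and occupied bandwidth) is twice the bit rate.
    def manchester_encode(bits):
        encoded = []
        for b in bits:
            encoded += [1, 0] if b else [0, 1]   # 1 -> high-low, 0 -> low-high (one common convention)
        return encoded

    data = [1, 0, 1, 1]
    print(manchester_encode(data))   # 4 data bits -> 8 channel symbols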

Z-Wave uses a Source Routing Algorithm (SRA), meaning that the message initiator has to embed the routing information into the packet. This implies overhead, as the route occupies space meant for the actual data payload (a sketch after the device list below illustrates this). More importantly, it means that the initiator needs to be aware of the network topology, which therefore has to be maintained and distributed to the nodes that may initiate messages. This is a complex task and is typically not manageable by an end device constrained in computing power, code size, battery capacity, and cost. Z-Wave defines different device types with different capabilities and protocol stack sizes:

  • Controllers: have the full, and largest, protocol stack, as they can initiate messages. The master controller, the Static Update Controller (SUC), maintains the network topology and handles network management.
  • Mobile controllers: can support requests for neighbor rediscovery from moving nodes by implementing the portable controller protocol stack.
  • Routing Slaves: depend on SUCs for network topology and can initiate messages to a restricted set of nodes.
  • Slaves: have the smallest protocol stack, can only reply to requests, and cannot initiate messages.
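
To see why source routing is costly on constrained nodes, here is a small sketch of how an embedded route eats into the payload of each frame; the frame size and per-hop cost are hypothetical, chosen only to illustrate the effect:

    # Illustration of source-routing overhead: the initiator embeds the full route
    # (one byte per hop here, a simplifying assumption) into every frame it sends.
    MAX_FRAME_PAYLOAD = 54          # hypothetical usable payload per frame, in bytes

    def payload_left(route_hops: int, bytes_per_hop: int = 1) -> int:
        return MAX_FRAME_PAYLOAD - route_hops * bytes_per_hop

    for hops in (0, 2, 4):
        print(f"{hops} hops embedded -> {payload_left(hops)} bytes left for application data")
    # Beyond the per-frame overhead, the initiator must also store and refresh the
    # network topology in order to compute these routes in the first place.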

When using multiple controllers in the same network, only the master (SUC) can be used for network maintenance. Whenever a Z-Wave device is added or removed from the network, the network topology of the master controller has to be replicated manually to the secondary controllers. This process makes network maintenance cumbersome.

The Source Routing Algorithm, along with the network topology management, also makes it very difficult to handle mobility. There is some support for nodes to request neighbor rediscovery; however, this is a complicated and power-consuming process. Taken together, this does not provide anything close to seamless support for mobility. In addition, Z-Wave also has security flaws, as can be seen from reports of successful attacks on Z-Wave devices.

Overall, Z-Wave has been quite successful thanks to the trade-offs it provides. Z-Wave is a lot simpler than ZigBee, yet it provides a sufficient set of basic functions for simple deployments in homes or small commercial spaces. Z-Wave has gained a good market share in the smart home and smart building space, proving the benefits of sub-GHz communication. Nevertheless, the limitations outlined above prevent it from becoming a future-proof technology for upcoming IoT applications.

Internet of Things Wireless Connectivity Option Analysis: Pros and Cons of Bluetooth Classic, Bluetooth Low Energy, and CSRmesh

Analysis of the major Bluetooth technologies, including Bluetooth Classic, Bluetooth Low Energy, and CSRmesh, as solutions for the last 100 m of IoT connectivity.

Bluetooth Classic

Bluetooth Classic, also standardized as IEEE 802.15.1 in 2002 and revised in 2005 (although this standard is no longer maintained), was invented in 1994 as a replacement for RS-232. Bluetooth Classic operates in the 2.4 GHz band, and a piconet is limited to eight active devices. For the following reasons, Bluetooth Classic is not a suitable protocol for IoT applications:

  • Bluetooth Classic was designed to provide low-latency wireless peripherals and has evolved to provide high data rates. This is achieved at the expense of power consumption.
  • The physical layer (PHY) of Bluetooth Classic only supports long packets (up to 2745 bits of payload) with mandatory channel encoding. This enables higher throughput; however, it is not suitable for resource-constrained devices.
  • The protocol stack of Bluetooth Classic has grown in complexity and can typically be 128 kB of code size, which is not satisfactory for IoT embedded devices.
  • Bluetooth Classic’s loose specification on the modulation index range does not make it easy to improve the receiver performance in the future. Consequently, Bluetooth Classic has poor coverage, typically less than 10 m.
  • With a 3-bit active member address per piconet, Bluetooth Classic is limited to a maximum of 8 connected devices, which is obviously insufficient for IoT applications.

Bluetooth Low Energy (BLE)

BLE, also known as Bluetooth v4.0 or Bluetooth Smart, originated from Nokia’s WiBree. Contrary to common belief, BLE is not compatible with Bluetooth Classic, since the physical layer (PHY) has been redesigned. BLE uses a fixed 1 Mbps data rate and GMSK modulation. It uses short packets and is suited for low-latency proximity communication. Unfortunately, BLE has the following issues that make it less suitable for IoT applications:

  • BLE operates in the crowded 2.4 GHz frequency band, along with Bluetooth Classic, Wi-Fi, ZigBee, and IEEE 802.15.4. This spectrum crowding will pose a severe reliability challenge to all 2.4 GHz devices, and the problem will only get worse as the number of connected objects increases.
  • BLE is optimized for low-latency sporadic transmissions, and its efficiency therefore degrades dramatically for larger data transfers. With its maximum application payload of 20 bytes per packet, the gross 1 Mbps data rate of BLE translates into a theoretical maximum transfer rate of 250 kbps, and in practice the actual transfer rate drops below 100 kbps. This is in contrast to Bluetooth Classic, where v1.2 achieves 700 kbps and v2.1 + EDR reaches 2 Mbps of actual transfer rate. An actual transfer rate of only 1/10 of the gross data rate is rather lackluster and translates into poor power-efficiency for this type of data traffic. Although many IoT applications may have a limited amount of data to transfer, e.g., for switching off or changing the color of a light bulb, others still require slightly larger transfers. As a result, BLE is not suitable for IoT applications that require higher data transfers.
  • BLE has limited range, and extending the network therefore requires a hybrid topology where some client nodes act as server nodes for other star networks. In Bluetooth-specific terminology, this is called a scatternet, which yields high network complexity in real deployments. Moreover, BLE is essentially asynchronous, so this hybrid topology (a mix of star and mesh) causes increased interference and increased power consumption, even inside a single network.
  • Finally, BLE suffers from interference from USB 3.0 and poses a challenge when operating with collocated LTE or WiMAX networks. This is reflected in the Bluetooth SIG filtering recommendations, although workarounds have been developed as well.

CSRmesh

In February 2014, CSR plc, formerly Cambridge Silicon Radio, announced the availability of their proprietary CSRmesh software. CSRmesh operates over Bluetooth Low Energy (BLE) with the aim of enabling a mesh topology on top of the restrictive BLE scatternet topology and providing direct communication between BLE devices. However, we want to note the following:

  • The main advantage of CSRmesh is to allow smartphone connectivity. It is still questionable whether this connectivity should be achieved via direct connection to any device or more simply via a gateway or routers, e.g., Wi-Fi or BLE-enabled routers, or even through cellular if a device is out of range.
  • Turning BLE into a mesh-able protocol is not that straightforward. Even if BLE in itself is power-efficient for low duty cycle and small data packets, enabling the mesh functionality would require each device to simultaneously be an observer and broadcaster. This implies that each device would continuously listen for advertising packets, and would then switch to advertising the received data for some period.
  • The inefficient use of the radio resources inherent to continuous receive would make it difficult to achieve ultra-low power consumption in resource-constrained devices. As reported on the CSR forums, idle-state current consumption of around 3 mA has been observed, which is 100x more than one would expect for a battery-powered IoT device (see the rough battery-life sketch after this list). In short, the asynchronous nature of BLE, optimized for low duty cycle / sporadic transmission, makes it challenging to implement a power-efficient mesh topology on top of the existing BLE protocol stack.
  • Allowing direct smartphone connection to every device may not provide additional functions. On the contrary, as discussed above it will drain the battery of the device. In addition, it is a potential security threat because there is no gateway with sufficient computing power to filter access and enable strong authentication security.
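
The reported idle current translates directly into battery life. A rough sketch, assuming a standard CR2032 coin cell of about 225 mAh and ignoring transmit bursts and self-discharge:

    # Rough battery-life estimate from average current draw, assuming a ~225 mAh
    # CR2032 coin cell and neglecting transmit peaks and battery self-discharge.
    BATTERY_MAH = 225

    def lifetime_days(average_current_ma: float) -> float:
        return BATTERY_MAH / average_current_ma / 24

    print(f"3 mA idle (reported):  ~{lifetime_days(3.0):.0f} days")    # roughly 3 days
    print(f"30 uA idle (expected): ~{lifetime_days(0.03):.0f} days")   # roughly 10 months
    # The ~100x difference in idle current is the difference between replacing the
    # battery every few days and running for the better part of a year.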

questions / comments? fire away!

Lockitron smart lock – WiFi is clearly not the IoT solution. How about BLE?

Lockitron attempted to use Wi-Fi as the communication link for its “smart lock” product but could not deliver a consumer-friendly product due to poor battery life. It’s a good lesson we should all learn from: Wi-Fi just does not work for battery-powered objects; it is simply too power hungry. Now, Lockitron is using BLE.

BUT there is no doubt BLE has its downsides. It operates in the crowded 2.4 GHz frequency band, along with Bluetooth Classic, Wi-Fi, ZigBee, and IEEE 802.15.4. This spectrum crowding will pose a severe reliability challenge to all 2.4 GHz devices, and the problem will only get worse as the number of connected objects increases.

BLE is optimized for low-latency sporadic transmissions, and its efficiency therefore degrades dramatically for larger data transfers. With its maximum application payload of 20 bytes per packet, the gross 1 Mbps data rate of BLE translates into a theoretical maximum transfer rate of 250 kbps, and in practice the actual transfer rate drops below 100 kbps. This is in contrast to Bluetooth Classic, where v1.2 achieves 700 kbps and v2.1 + EDR reaches 2 Mbps of actual transfer rate. An actual transfer rate of only 1/10 of the gross data rate is rather lackluster and translates into poor power-efficiency for this type of data traffic. Although many IoT applications may have a limited amount of data to transfer, e.g., for switching off or changing the color of a light bulb, others still require sizeable transfers. As a result, BLE is not suitable for IoT applications that require higher data transfers.
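
Here is a back-of-the-envelope sketch of where the gap between the 1 Mbps gross rate and the ~250 kbps theoretical ceiling comes from; the overhead byte counts and spacing are assumptions loosely based on typical BLE 4.x framing, not exact specification values:

    # Rough BLE 4.x throughput estimate: 20 bytes of application payload per packet,
    # plus assumed link-layer/L2CAP/ATT overhead, inter-frame spacing and an empty ACK.
    BIT_TIME_US = 1.0               # 1 Mbps gross rate -> 1 us per bit
    PAYLOAD_BYTES = 20
    OVERHEAD_BYTES = 17             # preamble, access address, headers, CRC (assumed)
    ACK_BYTES = 10                  # empty packet acknowledging the transfer (assumed)
    T_IFS_US = 150                  # inter-frame spacing before and after the ACK

    packet_us = (PAYLOAD_BYTES + OVERHEAD_BYTES) * 8 * BIT_TIME_US
    ack_us = ACK_BYTES * 8 * BIT_TIME_US
    cycle_us = packet_us + T_IFS_US + ack_us + T_IFS_US

    throughput_kbps = PAYLOAD_BYTES * 8 / cycle_us * 1e3
    print(f"~{throughput_kbps:.0f} kbps best case")
    # Close to the ~250 kbps theoretical ceiling cited above, before connection-event
    # scheduling, retransmissions, or protocol round-trips push it lower still.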

BLE has limited range, and extending the network therefore requires a hybrid topology where some client nodes act as server nodes for other star networks. In Bluetooth-specific terminology, this is called a scatternet, which yields high network complexity in real deployments. Moreover, BLE is essentially asynchronous, so this hybrid topology (a mix of star and mesh) causes increased interference and increased power consumption, even inside a single network.

Finally, BLE suffers from interference from USB 3.0 and poses a challenge when operating with collocated LTE or WiMAX networks. This is reflected in the Bluetooth SIG filtering recommendations, although workarounds have been developed as well.

BLE may be a viable short term solution. But we will see what unfolds if a future of 50 billion objects comes to fruition.