LoRaWAN vs. Sigfox vs. Weightless-P: Simulation Results in the “Real World”

In wireless communication, the Hata model for urban areas, also known as the Okumura–Hata model because it extends the earlier Okumura model, is the most widely used radio-frequency propagation model for predicting the behaviour of cellular transmissions in built-up areas. The model incorporates the graphical information from the Okumura model and develops it further to account for the effects of diffraction, reflection, and scattering caused by city structures. It also has two further variants for transmission in suburban and open areas. (Source: Wikipedia)

The Hata model simulation was conducted for Sigfox, LoRa, and Weightless-P, with the base-station height set at 30 m and the end-device height set at 0.5 m. The simulation was run by Ubiik (the hardware developer behind Weightless-P), but we have checked their math and our team has confirmed the numbers are accurate and unbiased.

Let's first take a look at the U.S. results (902-928 MHz):

 


Now let's take a look at the results in Europe (863-870 MHz). The only difference is that LoRa is restricted to a smaller bandwidth there.


 

Let's see what these numbers mean for an actual smart-metering deployment.


Source: http://www.ubiik.com/lpwan-comparisons

NB-IoT vs. unlicensed LPWAN

One of 3GPP's chief low-power, wide-area (LPWA) technologies under development is NB-IoT (Narrowband IoT). Many have been speculating about the differences between NB-IoT and the current LPWAN technologies in the unlicensed bands, such as LoRa, Weightless-P, Sigfox, and RPMA. Some have even gone as far as saying NB-IoT will be the death of unlicensed LPWAN technologies. That is likely not going to be the case, as there will always be a huge difference between the use cases of licensed and unlicensed technologies. The best analogy is WiFi (unlicensed) vs. 4G (licensed): the business models and use cases built around WiFi and 4G are night and day.

NB-IoT may not be as robust as we are expecting it to be. Check out the following features, which are likely to be a slight let-down to NB-IoT enthusiasts:

1. No full acknowledgement: By design (see 3GPP specification TR 45.820), NB-IoT is planned to acknowledge only 50% of the messages serviced by the wireless technology, due to limited downlink capacity. Unlicensed technologies like Weightless-P allow full acknowledgement of every message. If every message is of high value, you need to know whether your messages were successfully sent and received via an acknowledgement.

 

2. Long latency: Transmit-packet aggregation, caused by the buffering of messages and data, means NB-IoT will not be able to support real-time responses, making it unsuitable for time-sensitive applications.

3. IoT devices will not be the network's priority: Licensed spectrum is expensive. Ingenu cited "$4.6 billion in a recent auction for only 20 MHz of spectrum!" IoT traffic will always come second to higher-margin cellular traffic.

4. Long battery life? The actual battery life will remain unknown until cellular LPWA networks are commercially available.

5. Availability: NB-IoT is a technology that will only be ready a few years down the line.

6. Compatibility: NB-IoT will differ across regions and carriers. Huawei initially pushed for a clean-slate NB-IoT technology that would not be backwards compatible with 4G; this made a lot of sense, as it would eliminate a lot of unnecessary overhead. But just as Huawei began making progress, Nokia and Ericsson began insisting on building upon the framework of LTE, which means significantly more complexity and unnecessary overhead. Not a very solid foundation for such a huge project.

 

IoT connectivity solutions: Media access control layer and network topology



For IoT applications, the main characteristics of the media access control (MAC) layer that need to be considered are multiple access, synchronization, and network topology.

Multiple Access. Looking back at decades of successful cellular system deployment, one can safely conclude that TDMA is a good fit for the IoT. TDMA is suited for low-power operation with a decent number of devices, as it allows for optimal scheduling of inactive periods. Hence, TDMA is selected for multiple access in the MAC layer.
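To illustrate why TDMA suits duty-cycled devices, here is a minimal scheduling sketch in Python; the frame length, slot length, and slot assignment are invented for illustration and not taken from any particular standard:

```python
# Minimal TDMA wake-up scheduling sketch; all parameters are
# illustrative, not taken from any specific LPWAN standard.
FRAME_MS = 1000       # assumed frame length: 1 s
SLOT_MS = 10          # assumed slot length: 10 ms

def next_tx_time_ms(now_ms: int, slot_index: int) -> int:
    """Return the next absolute time this device's slot begins."""
    frame_start = (now_ms // FRAME_MS) * FRAME_MS
    slot_start = frame_start + slot_index * SLOT_MS
    if slot_start <= now_ms:          # missed this frame; wait for the next
        slot_start += FRAME_MS
    return slot_start

# A device assigned slot 42 sleeps until its slot, transmits, then
# sleeps again -- its inactive periods are known in advance by design.
print(next_tx_time_ms(now_ms=123456, slot_index=42))   # -> 124420
```

Because every device can compute its own slot, the radio stays off for the rest of the frame, which is exactly the "optimal scheduling of inactive periods" noted above.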

Synchronization. In IoT applications, there are potentially a very large number of power-sensitive devices with moderate throughput requirements. In such a configuration, it is essential to maintain a reasonably consistent time base across the entire network and potentially across different networks. Given that throughput is not the most critical requirement, it is suitable to follow a beacon-enabled approach, with a flexible beacon period to accommodate different types of services.

Network topology. Mobile networks using a cellular topology have efficiently serviced large numbers of devices with a high level of security and reliability, e.g., 5,000+ devices per base station for LTE in urban areas. This topology is based on a star within each cell, while the cells are connected in a hierarchical tree in the network backhaul. This approach is regarded as suitable for the IoT and is therefore selected.

The network layer and interface to applications

The network layer (NWK) and the interface to applications are less fundamental as far as power-efficiency and reliability are concerned. In addition, there is more variation in the field of IoT applications. Nevertheless, it is widely acknowledged that IoT applications need to support the Internet Protocol (IP), whether IPv4 or IPv6. In addition, the User Datagram Protocol (UDP) and the Constrained Application Protocol (CoAP) could provide the relevant trade-off between flexibility and implementation complexity on resource-constrained devices.
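As a rough sketch of how light such a stack can be on the device side, the following Python snippet sends one sensor reading as a single UDP datagram; the server address and JSON payload are invented for illustration, and a real deployment would more likely run CoAP (RFC 7252) on top of UDP:

```python
import socket, json

# Hypothetical endpoint; 5683 is the standard CoAP-over-UDP port.
SERVER = ("192.0.2.10", 5683)

reading = json.dumps({"id": "sensor-1", "temp_c": 21.5}).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)            # constrained devices cannot afford to wait long
sock.sendto(reading, SERVER)    # a single datagram, no connection setup
try:
    ack, _ = sock.recvfrom(256) # optional application-level acknowledgement
except socket.timeout:
    pass                        # fire-and-forget is acceptable over UDP
sock.close()
```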

Furthermore, the IoT will represent an immense security challenge, and it is likely that state-of-the-art security features will become necessary. As of today, we can assume that 128-bit Advanced Encryption Standard (AES) for encryption, together with Diffie-Hellman (DH) or its Elliptic Curve Diffie-Hellman (ECDH) variant for key agreement, will become the baseline for securing communication.
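A minimal sketch of that baseline, using the Python cryptography package: ephemeral ECDH key agreement followed by AES-128 authenticated encryption. The curve, KDF parameters, and payload are illustrative choices, not a prescribed profile:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral EC key pair (curve choice is illustrative).
device_key = ec.generate_private_key(ec.SECP256R1())
server_key = ec.generate_private_key(ec.SECP256R1())

# ECDH: both sides derive the same shared secret from the peer's public key.
shared = device_key.exchange(ec.ECDH(), server_key.public_key())

# Derive a 128-bit AES key from the shared secret.
aes_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
               info=b"iot-session").derive(shared)

# Encrypt a sensor payload with AES-128-GCM (authenticated encryption).
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"temp=21.5", None)
assert AESGCM(aes_key).decrypt(nonce, ciphertext, None) == b"temp=21.5"
```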

4 Main ‘Must Haves’ for the Physical Layer of Internet of Things Wireless Connectivity


Analysis of the physical layer of wireless communication solutions for IoT applications.

For IoT applications, the main characteristics of the physical layer that need to be considered are modulation, data rate, transmission mode, and channel encoding.

Modulation. The nature of IoT applications, many of which involve infrequent data transmission and need low-cost, low-complexity devices, precludes the use of high-order modulation or advanced channel coding such as trellis-coded modulation. Unless made mandatory by a harsh radio environment with narrowband interferers or by regulatory constraints, spread spectrum, e.g., Direct Sequence Spread Spectrum (DSSS), is to be avoided: it increases the channel bandwidth, requiring a more costly and power-consuming RF front-end, with no data-rate improvement. Allowing non-coherent demodulation relaxes the constraint on device complexity, so (Gaussian) Frequency Shift Keying ((G)FSK) is a proven and suitable choice, as in Bluetooth radio. The most sensible choice, where available, is Gaussian Minimum Shift Keying (GMSK), as its modulation index of 1/2 allows for lower complexity, or better sensitivity at a given complexity. When the available bandwidth is restricted, GFSK with a lower modulation index is still appropriate, the next best being 1/3, as it still allows near-optimal demodulation at reasonable complexity.
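As a rough illustration of how simple a GFSK transmitter is, here is a sketch of a complex-baseband GFSK modulator in Python/NumPy. The samples-per-symbol, BT product, and filter span are illustrative; h = 0.5 corresponds to the MSK-family case discussed above:

```python
import numpy as np

def gfsk_baseband(bits, sps=8, bt=0.5, h=0.5):
    """Complex-baseband GFSK; h = 0.5 with BT ~ 0.3 approximates GMSK."""
    nrz = np.repeat(2 * np.asarray(bits) - 1, sps)   # NRZ symbols, upsampled
    t = np.arange(-2 * sps, 2 * sps + 1) / sps       # filter span: 4 symbols
    sigma = np.sqrt(np.log(2)) / (2 * np.pi * bt)    # Gaussian width from BT
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()                                     # unit DC gain
    freq = np.convolve(nrz, g, mode="same")          # smoothed frequency pulse
    phase = np.pi * h * np.cumsum(freq) / sps        # integrate freq -> phase
    return np.exp(1j * phase)                        # constant-envelope signal

iq = gfsk_baseband([1, 0, 1, 1, 0, 0, 1, 0])
print(iq.shape, np.allclose(np.abs(iq), 1.0))        # (64,) True
```

The constant envelope is the point: it permits cheap, efficient nonlinear power amplifiers, and the signal can be demodulated non-coherently with a simple frequency discriminator.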

Data rate. IoT applications need to mix very low data-rate requirements, e.g., a sensor or an actuator with limited data size in either uplink or downlink, with more demanding ones, e.g., a 6-inch 3-color ePaper display in a home that updates the daily weather forecast or the shopping list, easily amounting to more than 196 kB of data. Yet even for small amounts of data, a carefully chosen higher data rate actually improves power consumption, thanks to a shorter transmission time and a reduced probability of collision. Similar reasoning applies to Bluetooth Low Energy (BLE, a.k.a. Bluetooth Smart, formerly Nokia's WiBree), which uses a 1 Mbps rate despite much lower data throughput. BLE is aimed at proximity communication, and its high gross data rate of 1 Mbps sacrifices range considerably. Even when operating at sub-GHz frequencies, which offer better range than 2.4 GHz for a given transmit power, 1 Mbps is considered the absolute upper limit. Beyond it, the increase in transceiver complexity and power does not improve the actual usable throughput, as the overhead of packet acknowledgement and packet-processing time becomes the bottleneck.
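The effect of data rate on energy per packet is easy to see with back-of-the-envelope numbers; the packet size, transmit current, and supply voltage below are assumptions for illustration only:

```python
# Energy per packet ~= time-on-air * TX current * supply voltage.
PACKET_BYTES = 32       # assumed payload
TX_CURRENT_A = 0.030    # assumed 30 mA transmit current
VOLTAGE_V = 3.0         # assumed supply voltage

for rate_kbps in (10, 100, 500):
    airtime_s = PACKET_BYTES * 8 / (rate_kbps * 1000)
    energy_mj = airtime_s * TX_CURRENT_A * VOLTAGE_V * 1000
    print(f"{rate_kbps:>4} kbps: {airtime_s*1000:6.2f} ms on air, "
          f"{energy_mj:.3f} mJ per packet")
```

Moving from 10 kbps to 500 kbps cuts both the time-on-air and the transmit energy by 50x, and the shorter airtime also shrinks the collision window.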

On the lower end, data rates below 40 kbps are actually impractical, as they would rule out using standard off-the-shelf 20 parts-per-million (ppm) crystals. Indeed, the frequency accuracy of these crystals is not sufficient: 20 ppm translates into an 18 kHz frequency error when operating in sub-GHz bands, and 48 kHz when operating at 2.4 GHz. A narrow channel would require an accurate crystal, such as a temperature-compensated crystal oscillator (TCXO), on both ends, including the client, which is more costly, power-consuming, and bulky [36]. The optimal baseline gross data rate is considered to be 500 kbps. Depending on the scale of the network, e.g., home, building, district, or city, the applications, and the number of devices, we expect different trade-offs, with actual deployments ranging from 100 kbps to 500 kbps.
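The crystal arithmetic is straightforward; a short check (915 MHz and 2.4 GHz are taken as representative carriers):

```python
# Crystal tolerance -> absolute frequency error: delta_f = ppm * 1e-6 * f_c.
# A channel much narrower than the combined error of both ends' crystals
# cannot be received without a costly TCXO or frequency tracking.
PPM = 20
for f_c in (915e6, 2.4e9):
    delta_f = PPM * 1e-6 * f_c
    print(f"{f_c/1e6:6.0f} MHz carrier: +/-{delta_f/1e3:5.1f} kHz per crystal, "
          f"up to {2*delta_f/1e3:5.1f} kHz between the two ends")
```

This reproduces the figures above: about 18 kHz of error per crystal in the sub-GHz bands and 48 kHz at 2.4 GHz.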

Transmission mode. Full-duplex communication is challenging, as it requires good isolation and does not allow resource sharing between transmit and receive. Full duplex also typically requires different frequencies for downlink and uplink. Since radio spectrum is scarce, half-duplex is therefore selected, preferably on a single radio channel.

Channel coding. There is potential for improving link quality and performance at a limited complexity increase by using (adaptive) channel coding together with an Automatic Repeat-reQuest (ARQ) retry mechanism. As of today, this is considered optional, given the complexity-cost-performance trade-offs achieved with current technologies, but provisions have to be made for future implementation. For now, flexible packet length is considered a sufficient means of adapting to link-quality variations.



Internet of Things Connectivity Option: Cellular Network Technologies


Review of Existing Cellular Network Technologies: The Pros and Cons

With all the shortcomings of the incumbent technologies discussed above, one might be surprised by the absence of the most widely used and proven communication technology by far: the cellular system. Indeed, current cellular technologies fulfill some of the requirements of 'good' IoT networks, most notably the coexistence of many simultaneously connected devices, absence of interference, high reliability, long range, and the ability to service both low-data-rate latency-sensitive and high-data-rate applications on the same infrastructure. However, current cellular technologies have characteristics that rule them out for most of the emerging IoT applications. This section reviews the most prominent existing cellular technologies.

  • 2G (GSM / GPRS / EDGE): 2G is power-efficient thanks to its Time Division Multiple Access (TDMA) nature and narrowband 200 kHz channel bandwidth, is relatively low-cost, and offers very long range, especially in its 900 MHz band. However, 2G is no longer actively maintained and developed, and its frequency bands may be re-farmed or even re-auctioned, potentially for IoT technologies.
  • 3G (UMTS / WCDMA / HSPA): 3G is power-hungry by design, due to continuous and simultaneous (full-duplex) receive and transmit using Code Division Multiple Access (CDMA), which has proven less power-efficient than TDMA; a wide 5 MHz channel bandwidth to achieve high data rates (Wideband CDMA); and high complexity, especially for dual-mode 2G/3G. WCDMA is not quite suitable for the IoT. Even for cellular, WCDMA has evolved back from CDMA to time-slotted High Speed Packet Access (HSPA) for higher data rates, and to pre-allocated timeslots for lower power consumption in HSPA+. In addition, its frequency duplexing means it has dedicated spectrum for uplink and downlink, making it best suited for symmetric traffic, which is not typical of IoT clients. It is well known that battery life is characteristically shorter when operating in 3G mode than in 2G mode, whether in idle state or during a low-data-rate, around 12 kbps, voice call.
  • 3G (CDMA2000 1xRTT, 1x EV-DO (Evolution-Data Only)): An evolution of IS-95/cdmaOne, the first CDMA technology, developed by Qualcomm, CDMA2000 shares most of its fundamental characteristics with WCDMA, although with a narrower channel bandwidth of 1.25 MHz.
  • Chinese 3G (UMTS-TDD, TD-SCDMA): Time Division Synchronous Code Division Multiple Access (TD-SCDMA) was developed in the People's Republic of China by the Chinese Academy of Telecommunications Technology, Datang Telecom, and Siemens AG, primarily as a way to avoid the patent and license fees associated with other 3G technologies. As a late-coming 3G technology with a single license granted to China Mobile and deployment only starting in 2009, TD-SCDMA is not widely adopted, and most likely never will be, as it will be deprecated by LTE deployments. TD-SCDMA differs from WCDMA in the following ways. First, it relies on Time Division Synchronous CDMA with a narrower 1.6 MHz channel bandwidth (1.28 Mcps). Second, it uses time duplexing, with dedicated uplink and downlink time slots. Third, its network is synchronous, with all base stations sharing a time base. Fourth, it provides lower data rates than WCDMA, but its time-slotted nature provides better power efficiency, along with lower complexity. Fifth, it can outperform GSM battery life in idle state and perform similarly in a voice call, which is significantly better than WCDMA. Finally, as opposed to WCDMA, TD-SCDMA requires neither continuous nor simultaneous transmit and receive, allowing for simpler system design and lower hardware complexity and cost. These differences actually make TD-SCDMA more suitable than WCDMA for asymmetric traffic and dense urban areas. Although TD-SCDMA is still too power-hungry to cover the most constrained IoT use cases, it could be considered the most suitable existing cellular technology for the IoT.
  • 4G (LTE): 4G is more power-efficient than 3G, with reduced complexity thanks to its data-only architecture (no native voice support) and its limited backward compatibility with 2G/3G. It uses an Orthogonal Frequency Division Multiple Access (OFDMA) physical layer in a wide channel bandwidth, typically 20 MHz, to deliver high data rates, 150 Mbps and more with MIMO. Interestingly, the requirements of the IoT have been acknowledged, and some standardization efforts are aimed at lower-complexity, lower-cost Machine-to-Machine (M2M) communication. Most notably, LTE Release 12 Cat-0 introduces Machine-Type Communication (MTC), which allows a narrower 1.4 MHz channel bandwidth and a lower peak data rate of 1 Mbps, with extended sleep modes for lower power. Release 13 is studying the feasibility of reducing the channel bandwidth further, down to 200 kHz, with a peak data rate down to 200 kbps and operation in more sub-GHz frequency bands. Release 12 is foreseen to be commercially available in 2017, and Release 13 in 2018 or later [31].

Among the main drawbacks of cellular are battery consumption and hardware cost. The closest cellular solution to the IoT is the Intel XMM 6255 3G modem, the self-proclaimed world's smallest 3G modem. It claims an area of 300 mm² in a 40 nm process (high density at the expense of higher cost and higher leakage, i.e., power consumption in sleep). Its power consumption figures are 65 µA when powered off, 900 µA in 2G / 3G idle state (with unspecified sleep-cycle duration), and 580 mA in HSDPA transfer state, with a supply voltage of 3.3-4.4 V (nominal 3.8 V). As a point of comparison, a typical IEEE 802.15.4 / ZigBee SoC using a 180 nm process comes in a 7 x 7 mm (49 mm²) QFN40 package with a sleep current below 5 µA and active receive / transmit under 30 mA, at a supply voltage between 2 V and 3.3 V. When normalizing to the same process, there is a roughly 100-fold increase in area from ZigBee to cellular, which reflects the complexity of the receiver and protocol and translates into much higher cost and power consumption. This underlines that, although cellular-type protocols could be very suitable for the IoT, existing cellular technologies are far too cumbersome and are overkill.
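A rough sanity check of that 100-fold figure, under the first-order assumption that die area shrinks with the square of the process node:

```python
# First-order normalization: silicon area scales roughly with (node length)^2.
cellular_mm2, cellular_nm = 300, 40    # Intel XMM 6255 figures quoted above
zigbee_mm2, zigbee_nm = 49, 180        # typical 802.15.4 SoC figures quoted above

# What the cellular modem would occupy if built on the 180 nm process.
cellular_at_180nm = cellular_mm2 * (zigbee_nm / cellular_nm) ** 2
print(f"normalized cellular area: {cellular_at_180nm:.0f} mm^2")   # ~6075 mm^2
print(f"ratio vs ZigBee SoC: {cellular_at_180nm / zigbee_mm2:.0f}x")  # ~124x
```

The result, roughly 120x, is consistent with the order-of-magnitude claim of a 100-fold difference.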

Another drawback of existing cellular technologies is that they operate in licensed frequency bands. This means a license holder needs to manage the radio resource, e.g., a network operator that charges users high rates in order to pay for the expensive spectrum licenses. With the rise of the IoT in the coming years, however, we cannot assume that the network operators will stand still. In addition, the regulatory bodies might re-assess the regulatory framework of frequency allocations.

In short, existing cellular network technologies have many characteristics that make them suitable for IoT applications. However, they suffer from the drawback of putting too much pressure on the power consumption of resource-constrained devices. In addition, they operate on scarce and expensive frequency bands. The next section presents a detailed discussion that leverages the beneficial characteristics and addresses the drawbacks of cellular technologies to define the design requirements that make cellular suitable for IoT applications.

Internet of Things wireless connectivity option analysis: Z-Wave Pros and Cons


As another asynchronous wireless networking protocol, Z-Wave is designed for home automation and remote control applications. Z-Wave originated from the Danish startup Zen-SYS and was acquired by Sigma Designs in 2008; the Z-Wave Alliance was formed in 2005. Unlike most of the competing technologies discussed so far, Z-Wave operates in sub-GHz bands: 868.42 MHz in Europe, 908.42 MHz in the US, 916 MHz in Israel, 919.82 MHz in Hong Kong, and 921.42 MHz in Australia and New Zealand. The use of sub-GHz bands brings improved range and reliability, and less interference, to the Z-Wave network. Nevertheless, there are a few issues worth mentioning when applying Z-Wave to the IoT.

Z-Wave offers limited data rates and mediocre spectrum efficiency due to its use of Manchester coding (invented in 1948) on its GFSK physical layer, which doubles the occupied spectrum for limited coding gain. Originally offering a low data rate of 9.6 kbps, Z-Wave has been upgraded to 100 kbps in its latest version. A Z-Wave network is limited to 232 nodes, and manufacturers recommend no more than 30 to 50 nodes in practical deployments. Moreover, Z-Wave relies on relays, such as wall-mounted light switches, to forward packets when devices are out of range.
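The bandwidth doubling is easy to see: Manchester coding maps each data bit to two channel symbols, so the symbol rate, and hence the occupied spectrum, doubles. A minimal sketch (using one of the two standard bit-to-symbol conventions):

```python
# Manchester coding: each data bit becomes two channel symbols,
# so the on-air symbol rate is twice the data rate.
def manchester_encode(bits):
    return [sym for b in bits for sym in ((1, 0) if b else (0, 1))]

data = [1, 0, 1, 1]
encoded = manchester_encode(data)
print(encoded)                    # [1, 0, 0, 1, 1, 0, 1, 0]
print(len(encoded) / len(data))   # 2.0 channel symbols per data bit
```

The guaranteed mid-bit transition makes clock recovery trivial, which is the appeal for low-cost receivers, but it buys no error-correction capability in exchange for the doubled bandwidth.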

Z-Wave uses a Source Routing Algorithm (SRA), meaning that the message initiator has to embed the routing information into the packet. This implies overhead, as the route occupies space meant for the actual data payload (see the sketch after the list below). More importantly, it means the initiator needs to be aware of the network topology, which therefore has to be maintained and distributed to the nodes that may initiate messages. This is a complex task, typically not manageable by an end device constrained in computing power, code size, battery capacity, and cost. Z-Wave defines different device types with different capabilities and protocol-stack sizes:

  • Controllers: have the full, largest protocol stack, as they can initiate messages. The master controller, the Static Update Controller (SUC), maintains the network topology and handles network management.
  • Mobile controllers: support requests for neighbor rediscovery from moving nodes by implementing the portable-controller protocol stack.
  • Routing Slaves: depend on SUCs for network topology and can initiate messages to a restricted set of nodes.
  • Slaves: have the smallest protocol stack, can only reply to requests, and cannot initiate messages.
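To make the source-routing overhead mentioned above concrete, here is a toy frame layout; the field sizes and maximum frame size are invented for illustration and are not the actual Z-Wave frame format:

```python
# Toy source-routed frame: the full hop list rides inside every packet.
# Field sizes are illustrative, NOT the real Z-Wave frame format.
MAX_FRAME = 64                       # assumed maximum frame size in bytes

def build_frame(route, payload):
    header = bytes([len(route)]) + bytes(route)   # hop count + hop node IDs
    frame = header + payload
    assert len(frame) <= MAX_FRAME, "route + payload exceed the frame"
    return frame

route = [12, 7, 33, 4]               # the initiator must know the whole path
frame = build_frame(route, b"\x01\x42")
print(len(frame), "bytes,", 1 + len(route), "of them routing overhead")
```

Every hop added to the route shrinks the space left for payload, and producing the route at all requires the initiator to hold an up-to-date picture of the topology.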

When using multiple controllers in the same network, only the master (SUC) can be used for network maintenance. Whenever a Z-Wave device is added to or removed from the network, the network topology of the master controller has to be replicated manually to the secondary controllers, which makes network maintenance cumbersome.

The Source Routing Algorithm, together with this network-topology management, also makes mobility very difficult to handle. There is some support for nodes to request neighbor rediscovery, but it is a complicated and power-consuming process; taken together, this provides nothing near seamless support for mobility. In addition, Z-Wave has security flaws, as can be seen from reports of successful attacks on Z-Wave devices.

Overall, Z-Wave has been quite successful thanks to the trade-offs it provides. Z-Wave is a lot simpler than ZigBee, yet it provides a sufficient set of basic functions for simple deployments in homes or small commercial spaces. Z-Wave has gained a good market share in smart homes and smart buildings, proving the benefits of sub-GHz communication. Nevertheless, the limitations outlined above prevent it from becoming a future-proof technology for upcoming IoT applications.

Internet of Things Connectivity Option Analysis: IEEE 802.15.4 technologies


Originally released in 2003, IEEE 802.15.4 defines a physical layer (PHY) and a media access control (MAC) layer, on top of which others can build different network and application layers; the most well-known are ZigBee and 6LoWPAN. IEEE 802.15.4 defines operation in the 2.4 GHz band using DSSS to alleviate narrowband interference, realizing a data rate of 250 kbps, although the chip rate is 2 Mcps due to spreading. IEEE 802.15.4 also defines operation in sub-GHz bands, but has failed to take full advantage of them: the specification only defines very low GFSK data rates, 20 kbps and 40 kbps, in these bands, and only allows a single channel in the European 868 MHz band (868.0-868.6 MHz). These restrictions make the 2.4 GHz variants of IEEE 802.15.4 more attractive, accounting for their wider adoption to date.
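The 250 kbps data rate and 2 Mcps chip rate follow directly from the O-QPSK PHY parameters; the figures below are the standard 2.4 GHz 802.15.4 values:

```python
# IEEE 802.15.4 O-QPSK PHY at 2.4 GHz: each 4-bit symbol is spread
# onto a 32-chip pseudo-noise sequence.
BITS_PER_SYMBOL = 4
CHIPS_PER_SYMBOL = 32
SYMBOL_RATE = 62_500          # symbols per second

data_rate = SYMBOL_RATE * BITS_PER_SYMBOL     # 250,000 b/s
chip_rate = SYMBOL_RATE * CHIPS_PER_SYMBOL    # 2,000,000 chips/s
print(f"data rate: {data_rate/1e3:.0f} kbps, chip rate: {chip_rate/1e6:.0f} Mcps")
print(f"spreading factor: {chip_rate // data_rate} chips per bit")
```

The eight-fold spreading is what buys interference tolerance at the cost of an eight-fold wider channel, the trade-off criticized in the physical-layer discussion above.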

The IEEE 802.15.4g amendment, entitled "Amendment 3: Physical Layer (PHY) Specifications for Low-Data-Rate, Wireless, Smart Metering Utility Networks", was approved in March 2012. It improves on the low data rates by enabling the use of more sub-GHz frequency bands, e.g., 169.4-169.475 MHz and 863-870 MHz in Europe, 450-470 MHz in the US, 470-510 MHz and 779-787 MHz in China, and 917-923.5 MHz in Korea. In addition, IEEE 802.15.4g introduces Multi-Rate FSK (MR-FSK), OQPSK, and OFDM physical layers, all applicable to these sub-GHz bands. Nevertheless, MR-FSK can still only achieve 200 kbps in the European 863-870 MHz band, using filtered 4FSK modulation; higher data rates require MR-OFDM, which may prove inappropriately complex for low-cost, low-power devices. These new physical layers also bring additional complexity, from the support of more advanced Forward Error Correction (FEC) schemes and from backward-compatibility hassle, as support for the previous FSK and OQPSK physical layers is mandated. Despite sensible technical choices that are generally well suited to powered devices, such as smart-grid and utility equipment, there is limited availability of 802.15.4g-enabled chipsets. Consequently, it will take some time for IEEE 802.15.4g to evolve and grow before it can be proven a viable option for the IoT.

The most common flavor of IEEE 802.15.4, operating at 2.4 GHz, provides limited range due to fundamental radio theory, as mentioned earlier, and the range is further degraded by the environment. Moisture affects 2.4 GHz propagation significantly (this is why microwave ovens also operate at 2.4 GHz: to be well absorbed by water), and any obstruction, such as a wall, door, or window, attenuates 2.4 GHz signals more than sub-GHz signals.

This may be worked around by using multi-hop communication via special relay devices. These relays cannot be regular battery-powered devices, since relaying implies continuous receiving, and the literature indicates that such a multi-hop approach increases overall power consumption. IEEE 802.15.4 is often claimed to offer a mesh topology to compensate for its limited radio coverage and reliability. Yet in practice this is a hybrid topology, because only certain AC-powered relays can forward packets; resource-constrained end devices still see the network as a star topology.

As some studies suggest, a multi-hop / mesh topology could be considered a future trend. However, the current single-radio approaches are not suitable for multi-hop and mesh: if relays and devices share the same medium for communication, a mesh topology is not an efficient solution, as multiple devices cannot communicate simultaneously.

Moreover, it has to be acknowledged that efficiently managing a large number of clients, ensuring their connectivity, and balancing the data flow in a star or tree topology network are already challenging enough without adding the unnecessary overhead of a multi-hop mesh solution.

Finally, IEEE 802.15.4 was not designed to handle coexistence with other collocated IEEE 802.15.4 networks, or device mobility. These limitations will prove to be a real problem when the number of connected devices grows dramatically in future IoT applications. Simply imagine nearby apartments within the same building each installing a compliant IEEE 802.15.4 IoT network and connected objects: IEEE 802.15.4 is not able to handle this situation. Until a solution is found to coordinate with nearby IEEE 802.15.4 networks, IEEE 802.15.4 is not a viable option for the IoT. This holds true for the IEEE 802.15.4-based technologies ZigBee and 6LoWPAN, as well as for BLE and Z-Wave, which likewise have no provision for this kind of scenario.