IoT connectivity solutions: Media access control layer and network topology


Media access control layer and network topology

For IoT applications, the main characteristics of the media access control (MAC) layer that need to be considered are multiple access, synchronization, and network topology.

Multiple Access. Looking back at decades of successful cellular system deployment, one can safely conclude that TDMA is a good fit for the IoT. TDMA is suited for low-power operation with a decent number of devices, as it allows for optimal scheduling of inactive periods. Hence, TDMA is selected for multiple access in the MAC layer.

Synchronization. In IoT applications, there are potentially a very large number of power-sensitive devices with moderate throughput requirements. In such a configuration, it is essential to maintain a reasonably consistent time base across the entire network and potentially across different networks. Given that throughput is not the most critical requirement, it is suitable to follow a beacon-enabled approach, with a flexible beacon period to accommodate different types of services.
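
To make the TDMA and beacon ideas above concrete, here is a minimal Python sketch of a beacon-enabled superframe scheduler; the beacon period, slot length, and node names are illustrative assumptions, not values taken from any particular standard.

```python
# Hypothetical beacon-enabled TDMA superframe: one beacon slot followed
# by dedicated uplink slots, with everything else spent sleeping.
BEACON_PERIOD_MS = 1000   # flexible beacon period chosen by the coordinator
SLOT_MS = 10              # duration of one TDMA slot
BEACON_SLOTS = 1          # first slot carries the beacon

def build_schedule(device_ids):
    """Assign each device a dedicated slot; remaining time is inactive."""
    slots_available = BEACON_PERIOD_MS // SLOT_MS - BEACON_SLOTS
    if len(device_ids) > slots_available:
        raise ValueError("more devices than slots in one beacon period")
    schedule = {dev: BEACON_SLOTS + i for i, dev in enumerate(device_ids)}
    duty_cycle = (BEACON_SLOTS + len(device_ids)) * SLOT_MS / BEACON_PERIOD_MS
    return schedule, duty_cycle

if __name__ == "__main__":
    schedule, duty = build_schedule([f"node-{n}" for n in range(20)])
    print(schedule["node-3"], f"duty cycle {duty:.1%}")
    # A node only wakes for the beacon slot and its own slot,
    # which is how TDMA enables long sleep periods.
```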

Network topology. Mobile networks using a cellular topology have efficiently serviced a large number of devices with a high level of security and reliability, e.g., 5,000+ per base station for LTE in urban areas. This approach is based on a star topology in each cell, with the cells connected in a hierarchical tree in the network backhaul. It is regarded as suitable for the IoT and is therefore selected.

The network layer and interface to applications

The network layer (NWK) and the interface to applications are less fundamental as far as power efficiency and reliability are concerned. In addition, there is more variation across IoT applications. Nevertheless, it is widely acknowledged that IoT applications need to support the Internet Protocol (IP), whether IPv4 or IPv6. In addition, the User Datagram Protocol (UDP) and the Constrained Application Protocol (CoAP) could provide the right trade-off between flexibility and implementation complexity on resource-constrained devices.
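
As a rough illustration of how lightweight this stack can be on a constrained device, the sketch below pushes a sensor reading as a single UDP datagram in Python; the gateway address is a placeholder (a documentation IP), and a real deployment would layer CoAP's compact header and request/response semantics on top of the same socket.

```python
# Minimal UDP "fire-and-forget" uplink of a sensor reading.
# 5683 is the registered CoAP/UDP port; the gateway IP is made up.
import json
import socket

GATEWAY = ("192.0.2.10", 5683)

def send_reading(sensor_id: str, value: float) -> None:
    payload = json.dumps({"id": sensor_id, "t": value}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, GATEWAY)   # no connection setup, no TCP handshake

send_reading("thermo-42", 21.5)
```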

Furthermore, the IoT will represent an immense security challenge, and it is likely that state-of-the-art security features will become necessary. As of today, we can assume 128-bit Advanced Encryption Standard (AES) for encryption, with Diffie-Hellman (DH) or its Elliptic Curve variant (ECDH) becoming the baseline for securing communication.
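
A minimal sketch of that security baseline, using the Python `cryptography` package to pair an ephemeral ECDH exchange with AES-128-GCM; the curve choice, HKDF parameters, and session label are assumptions made for illustration, and a production device would run an equivalent handshake in firmware.

```python
# ECDH key agreement followed by AES-128 authenticated encryption.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each side generates an ephemeral key pair and exchanges public keys.
device_key = ec.generate_private_key(ec.SECP256R1())
gateway_key = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same 128-bit AES key from the ECDH shared secret.
shared = device_key.exchange(ec.ECDH(), gateway_key.public_key())
aes_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
               info=b"iot-session").derive(shared)

# AES-128-GCM provides confidentiality and integrity for each frame.
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"temp=21.5C", None)
assert AESGCM(aes_key).decrypt(nonce, ciphertext, None) == b"temp=21.5C"
```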

Best Electronic Shelf Label Companies

Ranking The Best Electronic Shelf Label Solution Providers.

One IoT case that fascinates me is the smart retail sector, specifically Electronic Shelf Labels (ESL). The solution replaces traditional paper price tags with connected digital price tags. Store owners can change prices instantaneously, opening up a myriad of opportunities: increasing store efficiency, enhancing the customer experience, optimizing inventory, and boosting revenue. Thousands of connected nodes, bi-directional communication, extremely low battery consumption, speedy transmission to the cloud – this case oozes with great IoT flavors, and it is not just a concept; it is live NOW in stores worldwide.

I will rank the current ESL vendors based on their overall hardware solution, their wireless connectivity solution, and the demonstrations provided at the NRF Big Show and EuroCIS, two of the biggest retail shows. Every main ESL player had a booth, and as a die-hard geek, I took the time to do an in-depth evaluation of each.

Worth noting: I learned that the e-paper displays are all identical, as there is only one worldwide vendor of the technology – E Ink. What actually makes an ESL solution work is the wireless connectivity solution and hardware simplicity.

Here is my ranking of the best Electronic Shelf Label Companies focusing on retail.

#1 M²Communication

Headquarters: France

M²Communication is, as they put it, “the new kids on the block,” but there is a reason this new player has emerged with significant traction.

This company is made up of radio-frequency chipset makers. As mentioned, the wireless communication aspect of ESL is actually what makes the whole solution “work”. They developed their own sub-GHz wireless communication protocol from scratch, and it can do A LOT more than just ESL (I took a look at their whitepaper). The salesmen at their booth are clearly engineers wearing suits and ties, which was quite a refreshing change from the car salesmen at all the other booths. They were very honest and transparent about their business status. The big selling point: their demo. All the other booths had some pretty awful demos; M²Communication's actually worked. They had about 100 price tags on display on a wall and let me use their web-based interface to change all the prices to my satisfaction. They probably regretted letting me take the reins, because I spent about 30 minutes on their laptop not only changing prices but also changing images and small product details. To my delight, a couple of seconds after I pressed the “update” button, all the tags began flashing one by one, showing the new price and content. True two-way communication, as each of the tags relayed its battery life and signal strength back to the computer.

HUGE differentiation: hardware simplicity. Their solution is plug and play. Their access point, responsible for communication between the store's system and the tags, is the size of a computer mouse. No professional installation required.

Definitely the most technically sound solution on the market right now. Let's just see how strong their sales and marketing team is as they try to push this past the giants.

#2 Displaydata

Headquarters: United Kingdom

Displaydata had a bunch of car salesmen at their booth who I felt were reading from a slide deck when I asked them technical questions. One guy went as far as telling me their display resolution was the best in the industry. I had to break the news to him that there is only one worldwide e-paper display vendor, achieving identical DPI (dots per inch). (He still insisted their displays are superior.)

It took me a few tries to get to the booth's “technical guy”. Their communication is similar to M²Communication's: a sub-GHz proprietary protocol. They did not design it themselves; they outsourced that work to another company, which they did not wish to disclose.

I think connectivity in the sub-GHz range is the way to go. It avoids crowded frequencies such as 2.4 GHz, which is congested by Wi-Fi, Bluetooth, etc. Anyway, the reason I have them ranked #2 is their bulky, expensive hardware and their demo. Their “dynamic communicator”, responsible for transmitting and receiving data from the tags, was fairly large and needs professional installation. I was verbally quoted $650-750 USD per “dynamic communicator”, and larger supermarkets would need up to 10 of these giants per installation. As for their demo, it actually failed the first time. And with me, you only get one first impression. It did eventually start working, and they were achieving roughly the same update speed as M²Communication, but they only used 2 tags for their demo 😦

This company seems to have a lot of manpower and is touting some impressive deployments in the supermarket industry. Good things are coming for this company.

#3 SES / Imagotag

Headquarters: Austria

SES is the oldest and largest ESL vendor. Their original wireless communication solution uses a SUPER DUPER low RF frequency: 36 kHz! The transmit speed is SUPER DUPER slow. This is the same kind of technology used by submarines to communicate in the depths of our oceans. To support this frequency you need a long antenna. By long I mean 1 km long. Some SES installations wrap a 1 km antenna around and around in the customer's ceiling. Their communication is only one-way. And the crazier thing is… they are currently the market leader. This is only because they got a head start in this market: they started in 1992. They recently acquired Imagotag, which is another way of saying “our solution is completely outdated”. Imagotag instantly gets bumped down for using the 2.4 GHz frequency as a solution. They say they use channels unoccupied by Wi-Fi and Bluetooth; I believe they said they are using channels 2, 3, 4, and 6 in the 2.4 GHz band. But we all know that Wi-Fi is not strictly bound to those channels. There is going to be significant interference, in my opinion, and from a physics point of view the range is not going to be as good as a sub-GHz solution.

#4 Pricer

Headquarters: Sweden

Pricer uses infrared technology to communicate with their tags. They have a tricky installation in the ceiling of their deployments. The hardware looks hideous and quite distracting if the retailer's ceiling is low. The infrared communication is not reliable: if a customer happens to be standing in front of the tag during the update, it will not be successful. The good thing about their solution is that the update speed should be quite fast. Range in a setting that is completely unoccupied, with all the lights off, should be pretty good.

A huge problem is the security of infrared. It can be easily hacked, as demonstrated by the following video, which shows how you can use a Game Boy to change the prices on an infrared ESL. Yikes.

ESL for Industrial Sector

There is rapid adoption of ESL in the industrial sector, replacing the 40-year-old process of manually placing paper labels on the literally millions of containers, carts, and sub-assemblies flowing through factories every day with simple, cost-effective wireless displays.

Industrial ESLs provide the reliability and visual instruction inherent in paper labels, along with automated tracking.

 

1. Ubiik

Headquarters: Japan

The key to adoption in the Industrial space is working with existing wireless infrastructure. Ubiik has managed to make ESL compatible with all off-the-shelf UHF RFID readers. The high adoption rate of this product in factories all over Asia places Ubiik at the forefront of ESL for the industrial sector.

Ubiik also has e-paper displays that can be updated via NFC (Android smartphones or any off-the-shelf NFC reader).


2. Omni-ID

Headquarters: Rochester, NY

In 2012, Omni-ID launched ProVIEW, the world's first visual tagging system, to replace paper-driven processes in manufacturing, providing not only the ability to track assets but also dynamic, readable instructions right on the tag, completely changing the auto-identification industry landscape. ProVIEW markets itself as RFID-compatible e-paper, but after taking a deep dive, we realised that Omni-ID actually uses a proprietary protocol to transmit to the ProVIEW tag. Therefore, factories will need to install Omni-ID's proprietary hardware/base station to update the displays, much like the ESLs in the retail space.
Omni-ID RFID tags in three sizes, showing various information.

3. Mpicosys

Headquarters: New York, NY

MpicoSys offers a variety of customised e-paper signage. The company has developed the PicoSign displays and builds special-purpose devices, in practice answering almost any requirement one might have for the use of e-paper displays. One of the best examples is the PicoSign Wall at the United Nations headquarters in New York.


4 Main ‘Must Haves’ for the Physical Layer of Internet of Things Wireless Connectivity


Analysis of the physical layer of wireless communication solutions for IoT applications.

For IoT applications, the main characteristics of the physical layer that need to be considered are modulation, data rate, transmission mode, and channel encoding.

Modulation. The nature of IoT applications (many involve infrequent data transmission and require low-cost, low-complexity devices) precludes the use of high-order modulation or advanced channel coding such as trellis-coded modulation. Unless mandated by a harsh radio environment with narrowband interferers or by regulatory constraints, spread spectrum, e.g., Direct Sequence Spread Spectrum (DSSS), is to be avoided, as it increases the channel bandwidth and requires a more costly and power-consuming RF front-end with no data rate improvement. Allowing non-coherent demodulation relaxes the constraint on device complexity, so (Gaussian) Frequency Shift Keying ((G)FSK) is a proven and suitable choice, as in the Bluetooth radio. The most sensible choice, when available, is Gaussian Minimum Shift Keying (GMSK), as its modulation index of ½ allows for lower complexity, or better sensitivity at a given complexity. When the available bandwidth is restricted, GFSK with a lower modulation index is still appropriate, the next best index being 1/3, as it still allows for near-optimal demodulation at reasonable complexity.
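
For reference, the relationship between modulation index, bit rate, and frequency deviation that drives the GMSK recommendation can be checked with a few lines of Python; the 500 kbps rate anticipates the data-rate discussion below, and the Carson-rule bandwidth is only a rough estimate.

```python
# Binary (G)FSK: modulation index h = 2 * f_dev / bit_rate, so a given
# index fixes the frequency deviation for a chosen data rate.

def freq_deviation(bit_rate_bps: float, mod_index: float) -> float:
    """Peak frequency deviation (Hz) for binary FSK with index h."""
    return mod_index * bit_rate_bps / 2

for h in (0.5, 1 / 3):                    # GMSK uses h = 1/2
    dev = freq_deviation(500_000, h)      # 500 kbps baseline rate, see below
    bw_carson = 2 * (dev + 500_000 / 2)   # rough occupied bandwidth (Carson)
    print(f"h={h:.2f}: deviation {dev/1e3:.0f} kHz, ~{bw_carson/1e3:.0f} kHz occupied")
```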

Data rate. IoT applications need to mix very low data rate requirements, e.g., a sensor or an actuator with limited data size on either uplink or downlink, with more demanding requirements, e.g., a 6-inch 3-color ePaper display in a home that updates the daily weather forecast or the shopping list, easily amounting to more than 196 kB worth of data. Yet even for small data amounts, a carefully chosen higher data rate actually improves power consumption thanks to shorter transmission time and a reduced probability of collision. Similar reasoning applies to Bluetooth Low Energy, a.k.a. BLE or Bluetooth Smart (formerly Nokia's WiBree), which uses 1 Mbps despite much lower application throughput. BLE is aimed at proximity communication, and its high gross data rate of 1 Mbps sacrifices range considerably. Even when operating at sub-GHz frequencies, which offer better range than 2.4 GHz for a given transmit power, 1 Mbps is considered the absolute upper limit. At the higher end, the increase in transceiver complexity and power does not improve the actual usable throughput, as the overhead of packet acknowledgement and packet processing time becomes the bottleneck.
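
A back-of-the-envelope calculation makes the data-rate argument tangible: since transmit current is roughly constant, energy scales with on-air time. The current and voltage below are assumed figures for a generic sub-GHz radio, not measurements.

```python
# Transmit energy for a small payload at different data rates.
TX_CURRENT_MA = 30.0   # assumed active TX current
SUPPLY_V = 3.0         # assumed supply voltage

def tx_energy_mj(payload_bytes: int, data_rate_bps: float) -> float:
    airtime_s = payload_bytes * 8 / data_rate_bps
    return TX_CURRENT_MA * SUPPLY_V * airtime_s   # mA * V * s = mJ

for rate in (10e3, 100e3, 500e3):
    print(f"{rate/1e3:>5.0f} kbps -> {tx_energy_mj(100, rate):.2f} mJ for 100 bytes")
# The 500 kbps transmission finishes 50x faster than 10 kbps and, to a
# first order, uses 50x less energy, with a lower collision probability.
```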

On the lower end, data rates below 40 kbps are actually impractical, as they would rule out using standard off-the-shelf 20 parts-per-million (ppm) crystals. The frequency accuracy of these crystals is not sufficient: 20 ppm translates into an 18 kHz frequency error when operating in sub-GHz bands and 48 kHz when operating at 2.4 GHz. A narrow channel requires an accurate crystal, such as a temperature-compensated TCXO, on both ends, including the client, which is more costly, power-consuming, and bulky [36]. The optimal baseline gross data rate is considered to be 500 kbps. Depending on the scale of the network, e.g., home, building, district, or city, the applications, and the number of devices, we expect different trade-offs, with actual deployments ranging from 100 kbps to 500 kbps.
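
The crystal arithmetic is easy to reproduce; the short sketch below recovers the 18 kHz and 48 kHz figures and shows why both ends of the link have to absorb the error.

```python
# Worst-case carrier offset from a +/-20 ppm reference crystal.

def freq_error_khz(carrier_hz: float, ppm: float) -> float:
    return carrier_hz * ppm * 1e-6 / 1e3

for carrier in (900e6, 2.4e9):
    err = freq_error_khz(carrier, 20)
    print(f"{carrier/1e6:.0f} MHz: +/-{err:.0f} kHz per device, "
          f"{2*err:.0f} kHz budget for the link")
# 900 MHz -> 18 kHz, 2.4 GHz -> 48 kHz, matching the figures above; a
# channel much narrower than this budget would need a TCXO.
```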

Transmission mode. Full-duplex communication is challenging, as it requires good isolation and does not allow for resource sharing between transmit and receive. Full duplex also typically requires different frequencies for downlink and uplink. Since radio spectrum is a scarce resource, half-duplex is selected, preferably on the same radio channel.

Channel coding. There is potential for improving link quality and performance with a limited complexity increase by using (adaptive) channel coding together with an Automatic Repeat-reQuest (ARQ) retry mechanism. As of today, this is considered optional given the complexity-cost-performance trade-offs achieved with current technologies; however, provisions have to be made for future implementation. For now, flexible packet length is considered a sufficient means of adapting to link quality variations.
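
As a sketch of the kind of provision described here, the snippet below combines a stop-and-wait ARQ retry loop with a packet length that shrinks when the link degrades; `send_and_wait_ack` is a hypothetical stand-in for the radio driver.

```python
# Stop-and-wait ARQ with adaptive packet length (illustrative only).
import random

def send_and_wait_ack(payload: bytes) -> bool:
    """Placeholder for a radio transmission; fails at random here."""
    return random.random() > 0.3

def send_reliable(data: bytes, max_len: int = 64, retries: int = 3) -> bool:
    """Fragment `data`, retry each fragment, halve the length on failure."""
    offset, length = 0, max_len
    while offset < len(data):
        chunk = data[offset:offset + length]
        for _ in range(retries):
            if send_and_wait_ack(chunk):
                offset += len(chunk)
                break
        else:
            if length <= 8:
                return False          # link too poor, give up
            length //= 2              # adapt: shorter packets on a bad link
    return True

print(send_reliable(bytes(200)))
```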


Frequency Bands Optimal for the Internet of Things

United States Frequency Allocations Chart (2003): The Radio Spectrum

Frequency bands of operation for the Internet of Things

Keeping in mind that the requirements of wireless IoT are low power, low cost, medium range (local to metro area), and moderate data rate, it is reasonable to assume that the frequency of operation should be between 100 MHz and 5.8 GHz. Lower frequencies would not allow for sufficient data rate due to limited contiguous spectrum availability, while higher frequencies would have a very short range. The frequency bands in this range can be sorted into four groups.

  1. Unlicensed bands: 315 MHz in US, 433 MHz almost worldwide, 780 MHz in China, 868 MHz in Europe, 915 MHz in US / Asia, 920 MHz in Japan (formerly 950 MHz), and 2.4 GHz / 5.8 GHz almost worldwide
  2. Licensed bands: cellular network bands, granted to network operators for deploying specific cellular technologies (2G / 3G / 4G). There are 14 frequency bands defined for GSM (3GPP TS 45.005), 26 for WCDMA (3GPP TS 25.101), and 44 for LTE (3GPP TS 36.101).
    • NB-IoT in the pipeline

The main GSM bands in use are:

    1. GSM 850: 824.2-849.2 MHz uplink, 869.2-894.2 MHz downlink
    2. GSM 900: 880-915 MHz uplink, 925-960 MHz downlink
    3. DCS 1800: 1710.2-1784.8 MHz uplink, 1805.2-1879.8 MHz downlink
    4. PCS 1900: 1850.2-1909.8 MHz uplink, 1930.2-1989.8 MHz downlink

The United States and Canada use GSM 850 and PCS 1900, with PCS 1900 primarily used in urban areas and GSM 850 in rural areas for better coverage. In Africa, Europe, the Middle East, and Asia, most providers use GSM 900 and DCS 1800, with GSM 900 being the most common. South America is mixed; for instance, GSM 850, GSM 900, DCS 1800, and PCS 1900 are all present in Brazil.

The main WCDMA bands are:

  1. IMT (band 1): 1920-1980 MHz uplink, 2110-2170 MHz downlink
  2. PCS A-F (band 2): 1850-1910 MHz uplink, 1930-1990 MHz downlink
  3. AWS A-F (band 4): 1710-1755 MHz uplink, 2110-2155 MHz downlink
  4. CLR (band 5): 824-849 MHz uplink, 869-894 MHz downlink
  5. EGSM/U-900 (band 8): 880-915 MHz uplink, 925-960 MHz downlink

Bands 1 and 8 allow roaming in ITU Regions 1 and 3, and some countries of region 2. Bands 2 and 4 allow roaming in ITU Region 2 only.

LTE has seen a proliferation of frequency bands, and provision for global roaming would require support for bands 1 (2100 MHz), 2 (1900 MHz), 3 (1800 MHz), 4 (AWS), 5 (850 MHz), 7 (2600 MHz), 8 (900 MHz), 13 (700c), 17 (700b), 18 (800 MHz), 19 (800 MHz), 20 (800 DD), 25 (1900 MHz), 26 (800 MHz), 28 (700 APT), 29 (700 d/e), 38 (TD 2600), 39 (TD 1900), 40 (TD 2300), and 41 (TD 2500). Whereas 2G allows global roaming with a quad-band device, 4G requires a 20-band, or icosa-band, device.

  3. Licensed bands with exceptions: the most common example is TV whitespace, already regulated in Europe and the US, which allows the use of frequencies granted to TV broadcasters but left unused in a given area.
  4. Forbidden bands: aeronautical, maritime, military, etc.

From the perspective of power efficiency, the lower the operating frequency, the better. For instance, as pointed out in [36], the laws of physics imply that the path loss at 2.4 GHz is 8.5 dB worse than at 900 MHz (Friis transmission equation), which translates into a 2.67x range improvement at 900 MHz for the same transmit power. Electronic circuits also lose efficiency at higher frequencies. Therefore, 2.4 GHz transceivers will always consume more than 433 MHz or 900 MHz transceivers for the same performance / transmit power.

In practice, lower frequencies are nonetheless constraining in that the required antenna size for best performance gets large: 17.3 cm for 433 MHz compared to 3 cm for 2.4 GHz (quarter wavelength). In addition, the usable frequency bands are usually narrow and duty-cycle-limited and may not allow for sufficient data throughput.
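
The propagation and antenna numbers quoted in the last two paragraphs follow directly from the free-space (Friis) relation and the quarter-wavelength rule, as the quick check below shows.

```python
# Free-space path loss difference and quarter-wave antenna lengths.
import math

C = 299_792_458  # speed of light, m/s

def extra_path_loss_db(f_high_hz: float, f_low_hz: float) -> float:
    """Free-space path loss difference between two carriers, same distance."""
    return 20 * math.log10(f_high_hz / f_low_hz)

def quarter_wave_cm(f_hz: float) -> float:
    return C / f_hz / 4 * 100

print(f"{extra_path_loss_db(2.4e9, 900e6):.1f} dB penalty at 2.4 GHz vs 900 MHz")
print(f"range ratio {2.4e9 / 900e6:.2f}x in free space")
print(f"quarter-wave antenna: {quarter_wave_cm(433e6):.1f} cm at 433 MHz, "
      f"{quarter_wave_cm(2.45e9):.1f} cm at 2.45 GHz")
# -> roughly 8.5 dB, 2.67x, 17.3 cm and 3.1 cm, in line with the text.
```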

TV whitespaces are interesting as well; however, the regulatory framework is complex, with potentially very dynamic frequency allocation and the need to rely on a central entity regulating the allowed channels in real-time.

The license-free Industrial, Scientific, and Medical (ISM) and Short Range Devices (SRD) sub-GHz frequency bands (just below 1 GHz) are considered potentially the best fit for IoT applications. There has been, and still is, a significant harmonization effort across countries to enable these IoT applications, even though supporting the 868 MHz (Europe), 915 MHz (US / Asia), and 920 MHz (Japan) bands already allows for very wide coverage. Europe has opened the 863-870 MHz band, and Japan has recently switched from 950 MHz to 920 MHz with relaxed constraints and a higher allowable transmit power of 20 mW instead of the previous 1 mW. This allows for good coverage and high data rates.

The regulatory framework for these license-free sub-GHz frequency bands is often deemed stringent and restrictive. This is not really the case, provided that the protocol is smart enough to implement interference-avoidance algorithms. If the RF application implements these algorithms correctly, there are actually very few constraints. In addition, these constraints work both ways, as they guarantee less interference from other devices operating on the same frequencies.

Today there are many frequency bands around 1 GHz licensed for cellular deployments. However, with the IoT market growing, one could reasonably expect some operators to re-allocate their expensive spectrum to the IoT, much like they have started doing for M2M. In short, the license-free sub-GHz frequency bands appear to be the most promising for IoT applications; however, provisions for operation in current sub-GHz cellular bands have to be made.

Internet of Things Connectivity Option: Cellular Network Technologies

Frequency reuse in a cellular network

Review of Existing Cellular Network Technologies: The Pros and Cons

With all the shortcomings of the incumbent technologies discussed above, one might be surprised by the absence of the most widely used and proven communication technology by far: the cellular system. Indeed, current cellular technologies manage to fulfill some of the requirements for 'good' IoT networks, most notably the coexistence of many simultaneously connected devices, absence of interference, high reliability, long range, and the ability to service both low-data-rate latency-sensitive and high-data-rate applications on the same infrastructure. However, current cellular technologies have characteristics that rule them out for most emerging IoT applications. This section presents a review of the most prominent existing cellular technologies.

  • 2G (GSM / GPRS / EDGE): 2G is power-efficient thanks to its Time Division Multiple Access (TDMA) nature and narrowband 200 kHz channel bandwidth, relatively low-cost, and very long range, especially in its 900 MHz band. However, 2G is no longer actively maintained and developed, and there is the possibility of re-farming or even re-auctioning its frequency bands, potentially for IoT technologies.
  • 3G (UMTS / WCDMA / HSPA): 3G is power-hungry by design due to continuous and simultaneous (full-duplex) receive and transmit using Code Division Multiple Access (CDMA), which has proven to be less power-efficient than TDMA, a wide 5 MHz channel bandwidth to achieve high data rates (Wideband CDMA), and high complexity, especially for dual-mode 2G/3G. WCDMA is not well suited for IoT. Even for cellular, WCDMA has evolved back from CDMA to time-slotted High Speed Packet Access (HSPA) for higher data rates, and even to pre-allocated timeslots for lower power consumption in HSPA+. In addition, its frequency-division duplex means dedicated spectrum for uplink and downlink, so it is best suited for symmetric traffic, which is not typical of IoT clients. It is well known that battery life is characteristically shorter when operating in 3G mode than in 2G mode, either in idle state or during a low-data-rate (around 12 kbps) voice call.
  • 3G (CDMA2000 1xRTT, 1x EV-DO (Evolution-Data Only)): As an evolution of IS-95/cdmaOne, the first CDMA technology, developed by Qualcomm, CDMA2000 shares most of the fundamental characteristics of WCDMA, although with a narrower channel bandwidth of 1.25 MHz.
  • Chinese 3G (UMTS-TDD, TD-SCDMA): Time Division Synchronous Code Division Multiple Access (TD-SCDMA) was developed in the People's Republic of China by the Chinese Academy of Telecommunications Technology, Datang Telecom, and Siemens AG, primarily as a way to avoid the patent and license fees associated with other 3G technologies. As a late-coming 3G technology with a single license granted to China Mobile and deployment only starting in 2009, TD-SCDMA is not widely adopted and most likely never will be (as it will be deprecated by LTE deployments). TD-SCDMA differs from WCDMA in the following ways. First, it relies on Time Division Synchronous CDMA with a narrower 1.6 MHz channel bandwidth (1.28 Mcps). Second, it uses time-division duplex with dedicated uplink and downlink time-slots. Third, its network is synchronous, with all base stations sharing a time base. Fourth, it provides lower data rates than WCDMA, but its time-slotted nature provides better power-efficiency, along with lower complexity. Fifth, it can outperform GSM battery life in idle state and can perform similarly in a voice call, which is significantly better than WCDMA. Finally, as opposed to WCDMA, TD-SCDMA requires neither continuous nor simultaneous transmit and receive, allowing for simpler system design and lower hardware complexity / cost. These differences make TD-SCDMA more suitable than WCDMA for asymmetric traffic and dense urban areas. Although TD-SCDMA is still too power-hungry to cover the most constrained IoT use cases, it could be considered the most suitable existing cellular technology for IoT.
  • 4G (LTE): 4G is more power-efficient than 3G, with reduced complexity thanks to its data-only architecture (no circuit-switched voice support) and its limited backward compatibility with 2G/3G. It uses an Orthogonal Frequency Division Multiple Access (OFDMA) physical layer in a wide channel bandwidth, typically 20 MHz, to deliver high data rates: 150 Mbps and more with MIMO. Interestingly, the requirements of the IoT have been acknowledged, and some standardization efforts are aimed at lower-complexity, lower-cost Machine-to-Machine (M2M) communication. Most notably, LTE Release 12 Cat-0 introduces Machine-Type Communication (MTC), which allows for a narrower 1.4 MHz channel bandwidth and a lower peak data rate of 1 Mbps, with extended sleep modes for lower power. Release 13 is studying the feasibility of reducing the channel bandwidth further, down to 200 kHz, with peak data rate down to 200 kbps and operation in more sub-GHz frequency bands. Release 12 is foreseen to be commercially available in 2017, and Release 13 in 2018 or later [31].

One of the main drawbacks of cellular is battery consumption and hardware cost. The closest cellular solution to IoT is the Intel XMM 6255 3G modem, the self-proclaimed world's smallest 3G modem. The Intel XMM 6255 claims an area of 300 mm² in a 40 nm process (high density at the expense of higher cost and higher leakage, i.e., power consumption in sleep). Power consumption figures are 65 uA when powered off, 900 uA in 2G / 3G idle state (with unspecified sleep cycle duration), and 580 mA in HSDPA transfer state, with a supply voltage of 3.3-4.4 V (nominal 3.8 V). As a point of comparison, a typical IEEE 802.15.4 / ZigBee SoC using a 180 nm process comes in a 7 x 7 mm (49 mm²) QFN40 package with a sleep current below 5 uA and active receive / transmit under 30 mA, with a supply voltage between 2 V and 3.3 V. When normalizing to the same process, there is roughly a 100-fold increase in area from ZigBee to cellular, which relates to the complexity of the receiver and protocol, and translates into much higher cost and power consumption. This underlines that, although cellular-type protocols could be very suitable for IoT, existing cellular technologies are far too cumbersome and overkill.
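
The 100-fold figure can be sanity-checked with a crude scaling assumption: digital area shrinks roughly with the square of the feature size. This ignores analog/RF blocks and the fact that 49 mm² is a package rather than a die, so treat the result as an order-of-magnitude estimate only.

```python
# Naive constant-design area scaling between process nodes.

def scale_area(area_mm2: float, from_nm: float, to_nm: float) -> float:
    return area_mm2 * (to_nm / from_nm) ** 2

zigbee_at_40nm = scale_area(49, 180, 40)      # ~2.4 mm2
print(f"ZigBee SoC scaled to 40 nm: {zigbee_at_40nm:.1f} mm2")
print(f"cellular / ZigBee area ratio: {300 / zigbee_at_40nm:.0f}x")
```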

Another drawback of existing cellular technologies is that they operate in licensed frequency bands. This means that a license holder needs to manage the radio resource, e.g., a network operator that charges users high rates in order to pay for the expensive spectrum licenses. With the rise of the IoT in the coming years, however, we cannot assume that the network operators will stand still. In addition, the regulatory bodies might re-assess the regulatory framework of frequency allocations.

In short, existing cellular network technologies have many characteristics that make them suitable for IoT applications. However, they put too much pressure on the power consumption of resource-constrained devices, and they operate in scarce and expensive frequency bands. The next section discusses how to leverage the beneficial characteristics of cellular technologies while addressing their drawbacks, in order to define design requirements that make a cellular-style approach suitable for IoT applications.

Sigfox Pros and Cons


Internet of Things Wireless Connectivity Option Analysis: Sigfox

PROS and CONS

Sigfox intends to deploy a managed network, much like a cellular network, dedicated to the IoT. Sigfox uses sub-GHz frequency bands and claims that it achieves long-range communication by relying on a very low data rate of 100 bps, approximately 100 to 1,000 times lower than the other IoT technologies discussed so far. Such a low data rate results in great sensitivity, which allows for long-range communication over multiple kilometers, provided there is no interference at all. Like LoRa, Sigfox faces constant criticism regarding its theoretical vs. actual range performance; its actual performance is substantially worse than the theoretical marketing numbers used to attract LPWAN enthusiasts. Although a managed IoT network is a viable approach for a number of IoT applications, the current Sigfox technology has several shortcomings that make it unsuitable for widespread IoT applications.

Sigfox does not employ any collision-avoidance techniques. Consequently, Sigfox is subject to stringent transmit power limitations and, in Europe, duty-cycle limitations that prevent a device from transmitting more than 1% of the time. Stricter regulation in Japan enforces power spectral density limitations, essentially making ultra-narrowband inapplicable.

Sigfox's 100 bps data rate is not practical with regular GMSK modulation and translates into a 2-second transmission time for a mere 12 bytes of payload. Because of the ultra-narrowband requirement, it also mandates the use of a very precise crystal, such as a temperature-compensated TCXO, which is more expensive than a regular 20 ppm crystal. Besides, such a narrowband transmission is the worst type of interferer for other systems: a single Sigfox device can already interfere with any wideband system, and the problem only grows with thousands of Sigfox devices, which do not implement any fair-use, collision-avoidance, or Listen Before Talk mechanisms.

The narrow band also makes it difficult to recover the data at the base station as a result of frequency error. Current Sigfox deployments are only one-way, and enabling two-way communication is quite challenging, if possible at all. One-way communication means no acknowledgement, so an application can only achieve reliability by retransmitting the same data several times in case it was not received the first time. Always transmitting 3 times, for instance, directly translates into a 3x increase in power consumption, which is very inefficient for resource-constrained devices.

Relying on high sensitivity, i.e., low received power, to achieve communication in a shared frequency band is bound to cause reliability issues. Although the Sigfox system can theoretically achieve kilometer-range communication, in practice any legal, regulation-compliant system using the same spectrum and deployed near a Sigfox device, or worse, near a Sigfox base station, may be enough to jam the Sigfox network.

Sigfox's data rate is so low that even sending the smallest amount of data, for instance 10 bytes of information, requires a transmission time of about 10 seconds, which increases the probability of collision with other devices. Power consumption also suffers: the transmitter draws roughly the same current whether it operates at 100 bps or 100 kbps, but at 100 bps it has to stay on for 1,000x longer, resulting in roughly 1,000x more energy consumed.
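
The airtime and energy arithmetic behind the 2-second and 1,000x figures is straightforward; the sketch below assumes roughly 26 bytes on air for a 12-byte payload (frame overhead included) and a generic transmitter current, both of which are assumptions rather than published Sigfox numbers.

```python
# Airtime and transmit energy at 100 bps vs 100 kbps.
# Assumptions: ~26 bytes on air per frame, 30 mA / 3 V transmitter.
TX_CURRENT_MA, SUPPLY_V = 30.0, 3.0

def airtime_s(frame_bytes: int, rate_bps: float) -> float:
    return frame_bytes * 8 / rate_bps

def tx_energy_mj(frame_bytes: int, rate_bps: float) -> float:
    return TX_CURRENT_MA * SUPPLY_V * airtime_s(frame_bytes, rate_bps)

slow, fast = airtime_s(26, 100), airtime_s(26, 100_000)
print(f"airtime: {slow:.1f} s at 100 bps vs {fast * 1000:.1f} ms at 100 kbps")
print(f"energy: {tx_energy_mj(26, 100):.0f} mJ vs {tx_energy_mj(26, 100_000):.2f} mJ "
      f"({slow / fast:.0f}x)")
print(f"retransmitting 3 times, as discussed above: {3 * slow:.1f} s on air")
```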

Such a low data rate and long packet duration (several seconds, compared to the typical milliseconds) make Sigfox extremely sensitive to frequency inaccuracy and interference. In particular, mobility is almost impossible: experiments have shown that communication becomes unreliable above a 6 km/h pedestrian speed and may have issues at cycling or running speeds.

In short, Sigfox would not be a feasible IoT protocol for fast-moving and resource-constrained IoT devices that need to communicate at high data rates.

Internet of Things wireless connectivity option analysis: Z-Wave Pros and Cons


As another asynchronous wireless networking protocol, Z-Wave is designed for home automation and remote control applications. Z-Wave originated from the Danish startup Zensys, which was acquired by Sigma Designs in 2008; the Z-Wave Alliance was formed in 2005. Unlike most of the competing technologies discussed so far, Z-Wave operates in the sub-GHz bands: 868.42 MHz in Europe, 908.42 MHz in the US, 916 MHz in Israel, 919.82 MHz in Hong Kong, and 921.42 MHz in Australia and New Zealand. The use of sub-GHz bands brings improved range and reliability, and less interference, to the Z-Wave network. Nevertheless, there are a few issues worth mentioning when applying Z-Wave to the IoT.

Z-Wave offers limited data rates and mediocre spectral efficiency due to Manchester coding of its GFSK modulation (a scheme dating from 1948), which doubles the occupied spectrum for limited coding gain. Originally offering a low data rate of 9.6 kbps, Z-Wave has been upgraded to 100 kbps in its latest version. A Z-Wave network is limited to 232 nodes, yet manufacturers recommend no more than 30 to 50 nodes in practical deployments. Moreover, Z-Wave makes use of relays, such as wall-mounted light switches, to forward packets when devices are out of range.
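
A tiny example shows why Manchester coding costs spectrum: every data bit becomes two channel symbols, so the on-air symbol rate, and hence the occupied bandwidth, doubles for a modest coding benefit.

```python
# Manchester encoding: one common convention maps 0 -> 01 and 1 -> 10,
# guaranteeing a transition in every bit period at twice the symbol rate.

def manchester_encode(bits):
    out = []
    for b in bits:
        out.extend((0, 1) if b == 0 else (1, 0))
    return out

data = [1, 0, 1, 1, 0]
encoded = manchester_encode(data)
print(encoded)                   # 10 channel symbols for 5 data bits
print(len(encoded) / len(data))  # always 2.0: twice the symbol rate
```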

Z-Wave uses a Source Routing Algorithm (SRA), meaning that the message initiator has to embed the routing information into the packet. This implies overhead, as the route occupies space meant for the actual data payload. More importantly, it means that the initiator needs to be aware of the network topology, which therefore has to be maintained and distributed to the nodes that may initiate messages. This is a complex task and is typically not manageable by an end device constrained in computing power, code size, battery capacity, and cost. Z-Wave defines different device types with different capabilities and protocol stack sizes:

  • Controllers: have the full and largest protocol stack, as they can initiate messages. The master controller, the Static Update Controller (SUC), maintains the network topology and handles network management.
  • Mobile controllers: can support requests for neighbor rediscovery from moving nodes by implementing the portable controller protocol stack.
  • Routing Slaves: depend on SUCs for network topology and can initiate messages to a restricted set of nodes.
  • Slaves: have the smallest protocol stack, can only reply to requests, and cannot initiate messages.

When multiple controllers are used in the same network, only the master (SUC) can be used for network maintenance. Whenever a Z-Wave device is added to or removed from the network, the network topology of the master controller has to be replicated manually to the secondary controllers, which makes network maintenance cumbersome.

The Source Routing Algorithm, along with the network topology management, also makes it very difficult to handle mobility. There is some support for nodes to request neighbor rediscovery; however, this is a complicated and power-consuming process, and taken together it does not provide anything near seamless support for mobility. In addition, Z-Wave has security flaws, as can be seen from reports of successful attacks on Z-Wave devices.

Overall, Z-Wave has been quite successful thanks to the trade-offs it provides. Z-Wave is a lot simpler than ZigBee, yet it provides a sufficient set of basic functions for simple deployments in homes or small commercial spaces. Z-Wave has a good market share in the smart home and smart building space, proving the benefits of sub-GHz communication. Nevertheless, the limitations outlined above prevent it from becoming a future-proof technology for upcoming IoT applications.