Trends and tradeoffs of implementing multi-standard wireless in IoT devices
While Internet of Things (IoT) device manufacturers have a growing number of connectivity options when defining next-generation systems, misjudgment of the fast-changing network landscape can leave those products off the grid, literally. This has led many to hedge their bets by supporting multiple wireless standards in the same system-on-chip (SoC), which helps safeguard against obsolescence and reduces bill of materials (BOM) costs. However, the decision of whether to implement these in hardware or software can be as significant as selecting the wireless technologies themselves.
In this interview with Aviv Malinovitch, Vice President and General Manager of the Connectivity Business Unit at CEVA, Inc., he overviews trends his company is seeing in combo wireless across multiple market segments, then outlines the pluses and minuses of using software-defined radio (SDR) or multiple hardware connectivity IPs for embedded device design.
What challenges does today’s crowded networking environment pose to IoT device manufacturers?
MALINOVITCH: Indeed, IoT is a broad term that is not yet clearly defined. Different market players perceive IoT in different ways, and the landscape is still changing.
In CEVA’s view, some communication standards such as Bluetooth, Wi-Fi, and perhaps 802.15.4 will keep their strong position in IoT down the road. Looking forward, we do see the number of wireless standards consolidating significantly, but we still expect a few wireless technologies to coexist in parallel over the longer term. This will enable better performance per application, as well as communication with legacy devices. For example, low-power connectivity is Bluetooth’s domain, high-bandwidth data transfer is most efficient using Wi-Fi, and NFC is much more secure than either of those standards for short-range connectivity.
In light of that, we continue to invest in a range of wireless connectivity technologies, as well as SDR platforms that can support multiple standards, including low-data-rate LTE standards such as LTE Category 0 (Cat-0) and Category M (Cat-M). This gives customers the choice of implementing IoT connectivity in hardware with dedicated connectivity IPs alongside a CPU, or on an SDR platform, where greater flexibility enables designers to modify their solution to fit any communication standard. The SDR approach also offers the advantage of addressing many segments of the diverse IoT market with a single platform or processor design, rather than committing to a pre-defined set of standards in hardware. One piece of advice for IoT developers is, if at all possible, to choose the technology among the well-established standards mentioned that best fits their target application. By sticking to well-established standards, you reduce the risk of going with the wrong technology.
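The rule of thumb above — match the standard to the application's power, bandwidth, and security profile — can be expressed as a simple selection rule. The sketch below is purely illustrative: the requirement flags and the fallback choice are my assumptions, not a CEVA recommendation.

```python
def pick_standard(low_power: bool, high_bandwidth: bool,
                  secure_short_range: bool) -> str:
    """Toy selector mapping application needs to a well-established standard.

    Mirrors the guidance in the text: secure short-range exchanges favor
    NFC, high-bandwidth transfer favors Wi-Fi, and low-power links favor
    Bluetooth. The priority order and the 802.15.4 fallback (e.g. for
    low-rate smart-home sensor meshes) are illustrative assumptions.
    """
    if secure_short_range:
        return "NFC"
    if high_bandwidth:
        return "Wi-Fi"
    if low_power:
        return "Bluetooth"
    return "802.15.4"

# Example: a battery-powered sensor with modest data needs.
print(pick_standard(low_power=True, high_bandwidth=False,
                    secure_short_range=False))
```

In a real product definition this decision would of course also weigh range, ecosystem maturity, and BOM cost, as the interview discusses.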
Given the propensity of wireless standards to change fairly quickly, there has been a move towards integrating multiple connectivity IPs in the same SoC. What demand are you seeing from industry for “combo wireless,” and what are the costs and benefits of this implementation?
MALINOVITCH: We see demand for a few different connectivity combinations targeting different market segments. In the smart home segment, customers are looking for Wi-Fi 802.11n or 802.11ac 1x1 combined with Bluetooth Low Energy (BLE), and another combination that is quite popular is BLE + 802.15.4. In the home we’re also seeing high-end 802.11ac 2x4 or 4x4 Wi-Fi together with Bluetooth or BLE for digital TVs and set-top boxes. For consumer electronics, wireless speaker applications are combining Wi-Fi 802.11n or 802.11ac with dual-mode Bluetooth. The machine-to-machine (M2M) and machine-type-communications (MTC) markets require LTE Cat-0 or Cat-M combined with Wi-Fi 802.11n 1x1, and, in some cases, GNSS as well for asset tracking, drone positioning verification, and the like.
Good examples of this integration are devices for the smart home, such as LED lighting, smoke detectors, presence detection, and the like. Many of these applications already use 802.15.4 technologies, while a growing number of them have begun to integrate BLE. These two technologies will certainly co-exist for many years to come, so it makes sense to have devices and/or gateways that integrate both. This is, for instance, the case with the Google Nest learning thermostat, which comes with Wi-Fi, 802.15.4, and BLE, though not all integrated onto the same chip. The BOM in this case is higher than if only one of the three were supported, but we will see more and more combo chips integrating two or even three of these technologies into a single chip, so that BOM costs become similar to a standalone Wi-Fi, BLE, or 802.15.4 chip. We already have multiple customers in the process of designing their own combo chips that support a number of connectivity standards, sometimes integrated with additional functionality as well. Clearly this integration is complex, and as such it generates some implementation risk. At CEVA we are aware of this, so we tune our IPs as well as our IP packages to ease integration risk and effort.
What are the tradeoffs between these combo chip architectures and going with an SDR solution?
MALINOVITCH: SDR enables the flexibility to support several standards using the same processor design. The competition between technologies will continue, and at the end of the day there will be winners and losers. Using SDR assures that, no matter the final result, your platform will still be useful. A good example is customers who developed both LTE and WiMAX on a single platform – once the industry chose LTE as the standard for 4G, they could continue developing down this road without having to start from scratch.
Traditionally, most people look at hardware IP versus SDR as a tradeoff between die size and power consumption on one hand and flexibility on the other. SDR is perceived as a “bigger die size, more power hungry” solution. While it is true that SDR is larger and consumes somewhat more power than a fully hardwired implementation, the delta these days is actually relatively small. This is not limited to high-end systems; low-end applications can still be flexible with a relatively small penalty in power consumption, with our Dragonfly platform being a good example. Dragonfly enables support for multiple Wi-Fi standards, GNSS, and LTE Cat-0 on a single DSP core. The power consumption penalty versus a fully hardware-based solution offering the same standards is minimal, so we will see more customers adopting the flexibility of SDR for applications where it makes sense, such as the smart grid.
Considering advances in SDR technology, why isn’t everyone taking that approach for new chip designs?
MALINOVITCH: It’s important to remember that one approach does not fit all. For designs where die size and power consumption must be optimized, we license hardware-based connectivity IPs; for those that want flexibility, we offer the SDR approach.
As always, software is a key part of platform development and implementation, and in many cases it doesn’t get the attention it needs. The software development of a flexible platform is more complex than that of a hardwired solution. So this is another tradeoff – the SDR approach allows much greater flexibility to adapt to new standards or upgrade existing ones, but the software development is a more complex process than implementing the system in hardware.
Theoretically, SDR can provide full flexibility. In reality, however, the more flexibility you get, the more you pay in terms of die size and power consumption. The most efficient approach in multi-standard, complex use cases is a hybrid one, putting some functions in hardware and some in software. This makes it extremely important to focus your SDR design from the start on the technologies defined by your target application, as doing so ensures the right DSP engine and additional hardwired components are chosen.
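The hybrid approach described above is commonly realized as a dispatch layer: stable, power-critical functions are routed to fixed-function hardware blocks, while standard-dependent logic stays in software so it can be upgraded. The sketch below illustrates this partitioning idea only; the stage names and the accelerator set are hypothetical, not taken from any CEVA product.

```python
# Illustrative hardware/software partition for a hybrid SDR modem.
# Each pipeline stage is dispatched to the engine chosen at design time.

def sw_stage(name, samples):
    """Flexible implementation running as DSP software (placeholder)."""
    return ("software", name, samples)

def hw_stage(name, samples):
    """Stand-in for a fixed-function hardware accelerator (placeholder)."""
    return ("hardware", name, samples)

# Design-time decision: compute-heavy, standard-stable functions go to
# hardware; protocol logic that may change with the standard stays soft.
HW_ACCELERATED = {"fft", "fir_filter"}

def dispatch(name, samples):
    """Route a pipeline stage to its assigned engine."""
    engine = hw_stage if name in HW_ACCELERATED else sw_stage
    return engine(name, samples)

# Walk a hypothetical receive pipeline and report where each stage runs.
for stage in ["fir_filter", "fft", "channel_decode", "mac_framing"]:
    engine, name, _ = dispatch(stage, [])
    print(f"{name}: {engine}")
```

The design choice mirrors the interview's point: only the functions whose standards are settled are worth hardening into silicon, because anything hardwired can no longer follow a standard revision.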