
How energy harvesting underpins the self-sustaining IoT sensor node

Nishant Nishant
Developments in renewable energy have led to the emergence of self-sustaining sensor nodes.

Editor’s note: This is the first of two articles reviewing recent advances in self-sustaining sensor node technology. The articles focus on integrating diverse renewable energy harvesting methods with power management techniques. Part 1 (below) covers design principles and Part 2 covers the choice of energy harvesters. These innovations address challenges of intermittent energy supply and ultra-low-power operation, paving the way for robust, long-lasting sensor networks in smart infrastructure, environmental monitoring and industrial applications.

Design principles for a self-sustaining sensor

When designing a self-sustaining sensor, it's crucial to take a holistic view so that each building block integrates into an energy-efficient whole. The lower the energy requirements of the sensor and its associated electronics, the more flexible the energy harvesting options.

Average figures are inadequate when assessing energy consumption. You need to understand the energy profile of the sensor and associated components. Sensors are connected to IoT networks via semiconductor devices whose functions include signal conditioning, analog-to-digital conversion (ADC), data processing, and communications over the network.
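To see why averages mislead, it helps to break one wake cycle into phases and compute the energy of each. The sketch below does this in Python; every current, voltage and duration is a hypothetical placeholder, chosen only to illustrate the method, so substitute datasheet figures for your own parts.

```python
# Illustrative energy profile for one sensor wake cycle.
# All currents, voltages and durations are hypothetical placeholders.

SUPPLY_V = 3.0  # assumed supply voltage, volts

# (phase name, current in mA, duration in ms)
phases = [
    ("sensor warm-up + sample", 0.5,  10.0),
    ("ADC conversion",          1.2,   2.0),
    ("MCU processing",          4.0,   5.0),
    ("radio TX (one packet)",  25.0,   8.0),
]

def energy_uj(current_ma: float, duration_ms: float, volts: float) -> float:
    """Energy in microjoules: E = V * I * t (mA * ms gives microcoulombs)."""
    return volts * current_ma * duration_ms

total_uj = sum(energy_uj(i, t, SUPPLY_V) for _, i, t in phases)
for name, i, t in phases:
    e = energy_uj(i, t, SUPPLY_V)
    print(f"{name:28s} {e:8.1f} uJ ({100 * e / total_uj:4.1f}%)")
print(f"{'total per wake cycle':28s} {total_uj:8.1f} uJ")
```

Even with these made-up numbers, one short radio transmission dominates the cycle, which is why a single "average current" figure hides where the energy actually goes.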

Based on a consensus of analyst estimates, around 75% of these semiconductors are microcontrollers. The rest are system-on-chip (SoC) devices. The latter, which often integrate more powerful CPUs, are typically used in applications requiring higher processing capability, such as edge AI, computer vision or advanced connectivity. There is also a growing market for MCUs optimized for edge AI applications.

In 70-80% of cases, wireless communication is integrated into the MCUs or SoCs, but industrial or mission-critical IoT deployments (e.g., long-range LoRaWAN, cellular IoT) may use separate radios for higher performance or flexibility. It’s important to note that wireless communication, which is how 80-90% of sensors are connected to networks, is often the most energy-intensive operation in an IoT sensor node. It sometimes consumes 10–100 times more energy than sensing or processing. As a result, choosing the right wireless protocol can make or break a micro-energy-harvested IoT system.

Incidentally, wired networks still have a role, particularly for low-latency or high-reliability applications where wireless may not be acceptable due to interference or critical timing. These account for around 20% of installations but are rarely battery-powered, because where wired connectivity is available, power can usually be delivered over the same infrastructure. Power over Ethernet (PoE) is one example.

The final piece of the puzzle is an understanding of power management and how it operates with the chosen energy harvesters.

The low-power mindset starts at the sensor element

Minimizing sensor, processor and wireless communication energy consumption is key to developing an effective and cost-efficient self-sustaining sensor node, so every building block needs to be selected with that aim in mind.

You may not have much control over the sensor's energy consumption: photoresistors used to measure light levels consume almost no energy, while image sensors, particularly video cameras, are relatively power-hungry. To create a self-sustaining IoT node, select the lowest-power option that meets the required functionality and performance.

How to choose an MCU or SoC for self-sustaining sensors

The next step is usually to select a suitable MCU. When minimizing energy consumption is high on the list of priorities in deciding any design trade-offs, these are the main characteristics to consider:

  • CPU core efficiency: Choose cores optimized for low power. These include Arm Cortex-M0+, M3 and M4 cores, some RISC-V cores, and devices that use sub-threshold logic to achieve ultra-low power.
  • Ultra-low power consumption: Look for deep sleep or standby current of less than 1 µA and active current of 100 µA/MHz or less, if performance requirements can be met.
  • Efficient power modes: Devices with multiple sleep modes, the ability to retain RAM and registers in deep sleep, and peripherals that can operate in low-power modes are all desirable.
  • Energy-aware peripherals: Some MCUs offer autonomous peripherals, such as sensor controllers, ADCs/DACs, and low-power communication interfaces (UART, SPI, I2C) that can be operated independently of the CPU.
  • Non-volatile memory with low write power: Look for flash that supports low-voltage writes, or FRAM, which offers faster, more energy-efficient writes than flash.
  • Integrated energy harvesting support: Some MCUs now integrate or support energy harvesting power management functions, including maximum power point tracking (MPPT) algorithms, storage capacitor management and voltage supervision.
  • Voltage and power supply flexibility: A wide operating voltage range will give more flexibility in integrating with energy harvesting circuits, as will the ability to operate with the unregulated supply from energy harvesters. You may also find brown-out detection and power gating control, where power can be shut off to individual peripherals, integrated into the MCU.
  • Integrated low-power wireless connectivity: Most sensor nodes use MCUs with integrated low-power radios. The choice of protocol is critical for both achieving performance requirements and minimizing energy consumption to make energy harvesting technically and economically viable.
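The sleep and active figures quoted above can be turned into a back-of-envelope average current, which is what ultimately has to match the harvester's output. The numbers below (clock speed, awake time per hour) are assumptions for illustration, not from any datasheet.

```python
# Average current for a duty-cycled MCU, using the targets quoted above:
# deep-sleep current < 1 uA, active current ~100 uA/MHz. Clock speed and
# awake time per hour are illustrative assumptions.

SLEEP_UA = 0.8              # assumed deep-sleep current, uA
ACTIVE_UA_PER_MHZ = 100.0   # active current per MHz (target from the text)
CLOCK_MHZ = 16.0            # assumed clock speed
ACTIVE_MS_PER_HOUR = 500.0  # assumed awake time per hour, ms

active_ua = ACTIVE_UA_PER_MHZ * CLOCK_MHZ   # current while running, uA
duty = ACTIVE_MS_PER_HOUR / 3_600_000.0     # fraction of time awake

avg_ua = duty * active_ua + (1 - duty) * SLEEP_UA
print(f"duty cycle: {100 * duty:.4f}%   average current: {avg_ua:.2f} uA")
```

With half a second of activity per hour, the sleep current contributes more to the average than the 1,600 µA active current does, which is why sub-microamp standby figures matter so much for harvested designs.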

In general, the desirable characteristics of SoCs are the same as those for MCUs, with additional considerations related to the higher performance requirements often associated with running AI algorithms at the network edge.

Choosing the best wireless protocol for self-sustaining sensors

The choice of wireless protocol is one of the most critical decisions in designing self-sustaining sensor nodes. The trade-offs are power consumption, range, data rate, and latency. This table gives an overview, but it's only a guide because environmental conditions can greatly impact the performance of wireless links. In general, the higher the frequency of operation, the more marked the impact can be. Range and data rate figures are, therefore, approximate.

The table shows the relative performance of popular wireless communication protocols for IoT devices.

A review of analyst reports suggests that LoRa/LoRaWAN, BLE and Zigbee have the largest installed base of wireless sensors, but Thread is beginning to replace Zigbee, particularly in smart home ecosystems.

The duty cycle is another factor with a major impact on energy consumption. An environmental monitoring device may only send small packets of data once per hour, equating to a duty cycle of less than 1%. Smart home devices are still sleep-dominant but typically have moderate duty cycles of 1-10%. Higher duty cycles are encountered in industrial IoT predictive maintenance and wearable health monitors, perhaps ranging from 10% to 50%, while video surveillance may be 100% for continuous monitoring. Combining duty cycle information with data on the receive and transmit energy consumption of radios, whether built into processor chips or separate modules, begins to build a picture of the energy requirements of your self-sustaining sensor node. This table provides some typical figures for the receive and transmit current consumption of radios with the various wireless protocols. Because applications vary so widely, all figures are just guidelines for comparison purposes.

The table shows the typical energy consumption of the most popular wireless protocols found in IoT devices. (PSM = Power Saving Mode; eDRX = Extended Discontinuous Reception)
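Combining a duty cycle with TX/RX currents of the kind tabulated above gives a daily radio energy budget. The sketch below does this for two of the scenarios described in the text; the supply voltage, currents and airtime splits are illustrative assumptions, so replace them with your radio's datasheet figures and your protocol's airtime calculator.

```python
# Daily radio energy from duty cycle and TX/RX currents.
# All numbers are illustrative assumptions, not datasheet values.

SUPPLY_V = 3.0
TX_MA, RX_MA = 25.0, 12.0   # assumed transmit and receive currents, mA

def radio_mj_per_day(tx_s: float, rx_s: float) -> float:
    """Radio energy in millijoules per day (V * mA * s = mJ)."""
    return SUPPLY_V * (TX_MA * tx_s + RX_MA * rx_s)

# Environmental monitor: one short uplink per hour, well under 1% duty
# cycle -- assume 50 ms TX and 100 ms RX per hourly report.
low = radio_mj_per_day(tx_s=24 * 0.05, rx_s=24 * 0.10)

# Wearable: ~20% radio duty cycle over 86,400 s/day, split 5% TX / 15% RX.
high = radio_mj_per_day(tx_s=86400 * 0.05, rx_s=86400 * 0.15)

print(f"hourly reporter:     {low:10.1f} mJ/day")
print(f"wearable, 20% duty:  {high:10.1f} mJ/day")
```

The gap between the two scenarios spans more than three orders of magnitude, which is why duty cycle, not peak current, usually decides whether harvesting is viable.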

Remember that the antenna can be a critical factor in determining the power budget for a wireless link. Wireless modules will often have integrated antennas, and their positioning and orientation will influence link performance on both transmit and receive. An external antenna, while adding some cost and complexity, may provide sufficient gain to enable the transmitter power to be significantly reduced while maintaining the required range and data rate. This has implications for the viability and performance of using energy harvesting to power the IoT node.
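The antenna-gain trade-off can be quantified with the free-space path-loss model. This is a simplification (real deployments need a fading margin on top), and the frequency, distance, sensitivity and margin below are assumed values for illustration only.

```python
# Trading antenna gain against transmit power using the free-space
# path-loss model. Link parameters below are illustrative assumptions.

import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for distance in km, frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

loss = fspl_db(1.0, 868.0)   # assumed 868 MHz link over 1 km
sensitivity_dbm = -110.0     # assumed receiver sensitivity
margin_db = 10.0             # assumed fading margin

# Required TX power with 0 dBi vs 3 dBi antennas at both ends of the link
for gain_dbi in (0.0, 3.0):
    tx_dbm = sensitivity_dbm + margin_db + loss - 2 * gain_dbi
    print(f"antenna gain {gain_dbi:.0f} dBi -> required TX power {tx_dbm:6.1f} dBm")
```

Every 3 dB of extra antenna gain at each end cuts the required transmit power by 6 dB, i.e. to a quarter, which translates directly into a smaller harvester or storage element.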

A further consideration is electrical noise and its impact on the signal-to-noise ratio (SNR) experienced at the wireless receiver. If this is poor, it may necessitate higher transmitter power, which increases the system's energy consumption. Industrial environments can be particularly challenging in this respect, and the power conversion process within energy harvesting systems can generate its own noise, adding to the problem.

After calculating the energy requirements profile of both the processor and the wireless link, it’s time to start thinking about how to harvest, store and control that energy.

See Part 2 of this series for the details.

 

PART 2: The choice of energy harvesters for self-sustaining sensors

About Author

Nishant Nishant
Avnet Staff

