Revamping my Sensor Dashboard

New Release

Introducing SensorDash 2.0

A complete architectural overhaul designed for speed, scalability, and a superior user experience.

The Engine Room

Moving from SQLite to InfluxDB

The original version relied on SQLite, which lagged with continuous streams. We migrated to InfluxDB, a dedicated Time-Series Database (TSDB).

  • Blazing Fast Queries: Instant aggregation for historical data.
  • Automatic Data Lifecycle: Retention policies keep storage lightweight (see the sketch below).
  • Scalability: Ingests thousands of data points per second.
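
To make the retention and aggregation points concrete, here is a minimal sketch assuming an InfluxDB 1.x install managed through the influx CLI (InfluxDB 2.x uses buckets and Flux instead). The database, measurement, and field names below are hypothetical placeholders, not the project's actual schema.

influx -execute 'CREATE DATABASE sensordash'
influx -execute 'CREATE RETENTION POLICY "two_weeks" ON "sensordash" DURATION 14d REPLICATION 1 DEFAULT'
influx -database 'sensordash' -execute 'SELECT MEAN("temperature") FROM "readings" WHERE time > now() - 24h GROUP BY time(1h)'

The retention policy makes InfluxDB drop raw points older than two weeks automatically, and the GROUP BY time() query is the kind of hourly aggregation the dashboard widgets rely on.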

The Visuals

Embracing ShadCN UI

A powerful backend deserves a beautiful frontend. We rebuilt the UI using ShadCN UI (Tailwind) for a minimalist aesthetic.

  • Modern Aesthetics: Data is put front and center.
  • Interactive Widgets: Responsive, real-time monitoring.
  • Accessibility: Keyboard-friendly by default.
  • Dark Mode: A highly requested feature, now built-in.

Version Comparison

Feature       | SensorDash 1.0 (Old)      | SensorDash 2.0 (New)
Database      | SQLite (Relational)       | InfluxDB (Time-Series)
Query Speed   | Slower on large datasets  | Real-time / Instant
UI Library    | Standard CSS/Bootstrap    | ShadCN UI (Tailwind)
Scalability   | Limited                   | High

What's Next?

With this robust foundation in place, I plan to introduce customizable alerts and multi-sensor overlay graphs in the next update.

Beyond the Microsoft Store: How to Build a Custom WSL Distribution from an Ubuntu 25.04 ISO

Installing a Custom Ubuntu WSL Distribution from an ISO

A comprehensive guide to extracting, packaging, and repairing a custom WSL image from scratch.

Ubuntu WSL Integration

Introduction

This document provides a comprehensive guide on how to install a Windows Subsystem for Linux (WSL) distribution using a downloaded ISO file. The target distribution in this guide is named Ubuntu-25.04.

Installing a WSL distribution from an ISO is not a direct process. You cannot simply point WSL to the .iso file. Instead, the process involves extracting the core Linux filesystem from the ISO, packaging it into a compressed tarball (.tar.gz), importing this tarball into WSL, and performing post-installation fixes to ensure core utilities function correctly.

This guide details the entire journey, including initial failed attempts and the final successful methodology, providing both the necessary commands and the theory behind why certain steps were taken.


Part 1: Preparation and Initial Extraction Attempts

This part covers the initial setup and the challenges encountered during the extraction of the root filesystem. Understanding these failures is key to understanding the successful method.

Step 1: Creating Necessary Directories

Before starting, we need to create two folders on the Windows host system:

  1. A temporary location to store the extracted filesystem and the final tarball.
  2. A permanent location where WSL will store the virtual hard disk (.vhdx) for the new distribution.

The following commands were used to create these directories:

mkdir C:\temp\ubuntu-rootfs
mkdir C:\WSL\Ubuntu-25.04

Step 2: The Challenge of Filesystem Extraction

The core of the Linux OS inside the Ubuntu ISO is stored in a compressed file named filesystem.squashfs (or similar, in our case it was minimal.squashfs). The main challenge is extracting this file while preserving Linux-specific attributes like symbolic links and file permissions.

First Attempt: Using 7-Zip on Windows

The most straightforward approach is to use a tool like 7-Zip to extract the minimal.squashfs file.

Command Attempted:

"C:\Program Files\7-Zip\7z.exe" x "C:\...\minimal.squashfs" -o"C:\temp\ubuntu-rootfs"

Result: Failure.

Theory: This method failed because 7-Zip, when running on Windows as a standard user, does not have the necessary privileges to create symbolic links, which are fundamental to a Linux filesystem. It also cannot create special device files (e.g., /dev/null). This resulted in numerous "Cannot create symbolic link" errors.

Second Attempt: Using `unsquashfs` in WSL to a Windows Directory

A more advanced approach is to use the unsquashfs utility from within an existing WSL instance, which is designed to understand Linux filesystems. The initial idea was to extract directly to the temporary directory on the Windows C: drive.

Command Attempted:

wsl unsquashfs -f -d /mnt/c/temp/ubuntu-rootfs /mnt/c/.../minimal.squashfs

Result: Failure.

Theory: This attempt also failed, but for a more subtle reason. While unsquashfs in WSL can create symbolic links, the target filesystem was NTFS (the Windows C: drive, mounted at /mnt/c/). NTFS has limitations in how it handles the sheer number and complexity of Linux-style symbolic links, leading to a "Too many levels of symbolic links" error. The process also failed because it couldn't create character device files without root privileges.


Part 2: The Successful Method: Isolate and Repair

The successful strategy involved isolating the extraction and packaging process entirely within the WSL native (ext4) filesystem and then fixing the permissions as a post-installation step.

Step 3: Extraction Inside the WSL Filesystem

To overcome the filesystem limitations of NTFS, the extraction was performed inside the home directory of the default WSL user, which resides on a native ext4 filesystem.

  1. Create a temporary directory inside WSL and copy the squashfs file into it:
    wsl -- bash -c "mkdir -p ~/wsl-temp && cp /mnt/c/Users/ManupaWickramasinghe/Downloads/ubuntu-25.04-desktop-amd64/casper/minimal.squashfs ~/wsl-temp/"
  2. Extract the filesystem without `sudo`: To avoid an interactive password prompt for sudo, the extraction was performed as a regular user. This required flags to bypass errors related to creating special files and attributes, which would be fixed later.
    wsl -- bash -c "cd ~/wsl-temp && unsquashfs -f -no-xattrs -ignore-errors minimal.squashfs"
    • -f: Force overwrite of any existing files in the destination.
    • -no-xattrs: Prevents the tool from trying to write extended attributes, avoiding a class of permission errors.
    • -ignore-errors: Ignores errors related to creating special device files, allowing the extraction to complete.

Step 4: Packaging the Filesystem into a Tarball

With the filesystem successfully extracted to the squashfs-root directory inside ~/wsl-temp, it was then packaged into a compressed tarball.

wsl -- bash -c "cd ~/wsl-temp && tar -czvf ubuntu-25.04.tar.gz -C squashfs-root/ ."
  • -c: Create a new archive.
  • -z: Compress the archive with gzip.
  • -v: Verbose output (optional, shows files being added).
  • -f ubuntu-25.04.tar.gz: Specifies the output filename.
  • -C squashfs-root/: Changes to the squashfs-root directory before adding files. This is crucial to ensure the tarball does not contain an extra parent directory.
  • .: Specifies that all files in the current directory (now squashfs-root) should be added.
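
As a quick, optional sanity check, you can list the first few entries in the archive to confirm the layout is correct before importing:

wsl -- bash -c "tar -tzf ~/wsl-temp/ubuntu-25.04.tar.gz | head"

The listed paths should start with ./bin, ./etc, and so on, rather than being nested under an extra squashfs-root/ directory.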

Step 5: Importing the Tarball into WSL

The final tarball was then moved from the WSL filesystem back to the Windows temporary directory and imported.

  1. Move the tarball:
    wsl -- bash -c "mv ~/wsl-temp/ubuntu-25.04.tar.gz /mnt/c/temp/"
  2. Import the distribution:
    wsl --import Ubuntu-25.04 C:\WSL\Ubuntu-25.04 C:\temp\ubuntu-25.04.tar.gz

    This command registers Ubuntu-25.04 as a new WSL distribution, storing its virtual disk in the previously created C:\WSL\Ubuntu-25.04 directory.
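
    Before moving on, it is worth confirming the import: list the registered distributions and launch the new one (at this stage it starts as root, since no default user has been configured yet).

    wsl --list --verbose
    wsl -d Ubuntu-25.04 -- uname -a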


Part 3: Post-Installation Configuration and Repair

The extraction method, while successful, left critical system files with incorrect ownership and permissions because it was run without sudo. This required a final repair phase.

Step 6: Repairing `sudo` and `passwd`

The sudo and passwd commands were not working due to incorrect file permissions. They were fixed by using wsl -u root to run commands as the root user from outside the distribution, bypassing the broken sudo.

  1. Fix `sudo` executable and configuration files:
    wsl -d Ubuntu-25.04 -u root -- bash -c "chown root:root /etc/sudo.conf && chown root:root /usr/bin/sudo && chmod 4755 /usr/bin/sudo"

    Theory: /usr/bin/sudo must be owned by root and have the setuid bit (4...) set. This allows it to run with root privileges even when executed by a normal user.

  2. Fix `passwd` utility and shadow file:
    wsl -d Ubuntu-25.04 -u root -- bash -c "chown root:root /usr/bin/passwd && chmod 4755 /usr/bin/passwd && chown root:shadow /etc/shadow && chmod 640 /etc/shadow"

    Theory: Similar to sudo, passwd needs setuid to modify the protected /etc/shadow file, which stores user password hashes.

  3. Fix the `/etc/sudoers` file:
    wsl -d Ubuntu-25.04 -u root -- bash -c "chown root:root /etc/sudoers && chmod 0440 /etc/sudoers"
    wsl -d Ubuntu-25.04 -u root -- bash -c "chown -R root:root /etc/sudoers.d && chmod 0755 /etc/sudoers.d"

    Theory: /etc/sudoers is the main configuration file for sudo. It must be owned by root and be read-only for security reasons.
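
    A quick way to verify all three repairs is to list the affected files and check their permission bits (expected modes shown as comments; exact sizes and dates will differ):

    wsl -d Ubuntu-25.04 -u root -- ls -l /usr/bin/sudo /usr/bin/passwd /etc/sudoers /etc/shadow
    # -rwsr-xr-x root root   /usr/bin/sudo    (the "s" is the setuid bit set by chmod 4755)
    # -rwsr-xr-x root root   /usr/bin/passwd
    # -r--r----- root root   /etc/sudoers     (mode 0440)
    # -rw-r----- root shadow /etc/shadow      (mode 640)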

Step 7: Final User Configuration

With the system repaired, a user account was created and configured.

  1. Create a new user and add to the `sudo` group:
    wsl -d Ubuntu-25.04 useradd -m -G sudo -s /bin/bash manupawick
  2. Set the new user as the default for login:
    wsl -d Ubuntu-25.04 -u root bash -c "echo -e '[user]\ndefault=manupawick' > /etc/wsl.conf"
  3. Set the user's password non-interactively:
    wsl -d Ubuntu-25.04 -u root -- bash -c "echo 'manupawick:password12312' | chpasswd"
  4. Finalize the installation by shutting down WSL to apply all changes:
    wsl --shutdown
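
    After the shutdown, a fresh launch should come up as the new default user, and sudo should now prompt for the user's password instead of failing:

    wsl -d Ubuntu-25.04 -- whoami    # should print manupawick once /etc/wsl.conf takes effect
    wsl -d Ubuntu-25.04 -- sudo -v   # should ask for the password and succeed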

Conclusion

The installation was successful. The key takeaway is that extracting a Linux filesystem for WSL requires careful handling of file permissions and symbolic links. The most reliable method is to perform the extraction and packaging within a native Linux filesystem (like the one provided by WSL itself) and then perform targeted permission repairs as a post-installation step.

Complete Guide: Rockchip Hardware Acceleration & Automated Conversion

A complete guide to enabling hardware acceleration and setting up an automated conversion service on your Rockchip-powered SBC. This post will walk you through installing the necessary drivers and deploying a systemd service to watch for and convert video files automatically.

Part 1: Installing Hardware Acceleration Libraries

For ffmpeg to use your Rockchip VPU (Video Processing Unit), it needs the correct driver libraries. There are two primary methods to get these essential components installed on your system.

Method A: Using a PPA (Recommended)

This is the easiest and most reliable method as it uses a pre-built package archive.

  1. Install PPA Tool:
    sudo apt update
    sudo apt install software-properties-common -y
  2. Add Rockchip PPA:
    sudo add-apt-repository ppa:liujianfeng1994/rockchip-multimedia
  3. Update & Install:
    sudo apt update
    sudo apt install rockchip-multimedia-config -y

Method B: Manual Installation

If the PPA command fails, you can add the repository manually.

  1. Add GPG Key:
    curl -s "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x8065BE1FC67AABDE" | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/rockchip-multimedia.gpg >/dev/null
  2. Add Repository Source:
    echo "deb [signed-by=/etc/apt/trusted.gpg.d/rockchip-multimedia.gpg] https://ppa.launchpadcontent.net/liujianfeng1994/rockchip-multimedia/ubuntu jammy main" | sudo tee /etc/apt/sources.list.d/rockchip-multimedia.list
  3. Update & Install:
    sudo apt update
    sudo apt install rockchip-multimedia-config -y
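
Whichever method you used, you can confirm that an rkmpp-enabled ffmpeg build is in place by listing its encoders and decoders. The exact entries depend on the ffmpeg build the packages pull in, but you should see the Rockchip variants used later in this guide.

ffmpeg -hide_banner -encoders | grep rkmpp    # expect an entry such as h264_rkmpp
ffmpeg -hide_banner -decoders | grep rkmpp    # expect an entry such as hevc_rkmpp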

Part 2: The Automation Scripts and Service

This service automatically finds and converts your files. It consists of a conversion script and two systemd files to manage and schedule it.

1. The Conversion Script

This script finds HEVC files, converts them using hardware acceleration, and deletes the original upon success. Save it as /home/manupa/bin/convert_hevc.sh.

#!/bin/bash
WATCH_DIR="/home/manupa/Downloads/SD/TVSeries"
LOG_FILE="/home/manupa/Downloads/converter.log"

log() { echo "$(date +'%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"; }

log "--- Starting conversion scan ---"
find "$WATCH_DIR" -type f \( -iname "*hevc*.mkv" -o -iname "*h265*.mkv" \) | while IFS= read -r i; do
    o="$(dirname "$i")/$(basename "$i" .mkv).mp4"
    log "Found: $i"
    if [ -f "$o" ]; then
        log "Skipping, output exists. Deleting original: $i"
        rm "$i"
        continue
    fi
    log "Converting to: $o"
    # -nostdin stops ffmpeg from consuming the file list being piped into this loop by find
    ffmpeg -nostdin -c:v hevc_rkmpp -i "$i" -c:v h264_rkmpp -c:a copy "$o"
    if [ $? -eq 0 ]; then
        log "SUCCESS. Deleting original: $i"
        rm "$i"
    else
        log "ERROR: ffmpeg failed on $i"
    fi
done
log "--- Scan finished ---"

2. Systemd Service

Tells systemd what to run. Save as hevc_converter.service.

[Unit]
Description=HEVC to H264 Conversion Service

[Service]
Type=oneshot
ExecStart=/home/manupa/bin/convert_hevc.sh

3. Systemd Timer

Tells systemd when to run it. Save as hevc_converter.timer.

[Unit]
Description=Run HEVC converter every 30 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=30min
Unit=hevc_converter.service

[Install]
WantedBy=timers.target

Part 3: Full Setup Procedure

  1. Create Directories & Files:

    Create the necessary directories and place the service/timer files from Part 2 into /home/manupa/.config/systemd/user/.

    mkdir -p /home/manupa/bin
    mkdir -p /home/manupa/.config/systemd/user
  2. Make Script Executable:
    chmod +x /home/manupa/bin/convert_hevc.sh
  3. Enable and Start the Timer:

    This command must be run without `sudo`.

    systemctl --user enable --now hevc_converter.timer
  4. Enable User Linger (For Servers):

    This ensures your timer runs even when you are not logged in. This command requires sudo.

    sudo loginctl enable-linger $(whoami)
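
    To confirm everything is wired up, check the timer, trigger one run manually, and watch the logs (journalctl --user shows output from user-level services):

    systemctl --user list-timers hevc_converter.timer   # confirm the timer is loaded and scheduled
    systemctl --user start hevc_converter.service       # kick off one conversion scan immediately
    journalctl --user -u hevc_converter.service -e      # inspect the service's recent output
    tail -f /home/manupa/Downloads/converter.log        # follow the script's own log file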

Creating a Beautiful, Stable Website Using Docker and Azure App Services

My Weather Station Project Journey

Documenting the journey of building a robust data logging and visualization solution.

You might remember my previous project where I built a miniature weather station. In that setup, I displayed live sensor readings using an MQTT broker app on Android. (If you missed it, you can check out the details here: Solar Weather Station with MQTT and the MQTT client I used: MQTT Client on Google Play).

Weather station setup 1 Weather station setup 2

While this initial approach worked for live data, I quickly ran into a couple of significant limitations:

  • Constant Internet Dependency: The system required a continuous internet connection to view the readings.
  • No Historical Data: Crucially, the MQTT setup didn't store any past data, making trend analysis impossible.

To overcome these challenges, I needed a more robust solution: a web application with a dedicated data logger.


What's a Data Logger and Why Did I Need One?

Data logger concept

In my previous setup, sensor data was published to an MQTT broker and displayed instantaneously. However, this data wasn't being saved anywhere. To address this, I needed to:

  1. Capture the Data: Subscribe to the MQTT topics.
  2. Store the Data: Save these readings into a persistent database.

For this, I developed a Python script that subscribes to the relevant MQTT topics and logs the incoming sensor data into an SQLite database. Each entry is timestamped with the current UTC time, ensuring that the logged data is traceable and clear for analysis.
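
The actual logger is a Python script, but the core idea can be sketched in a few lines of shell using mosquitto_sub and the sqlite3 CLI. The broker address, topic, and table schema below are hypothetical placeholders rather than the project's real ones.

DB=readings.db
sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS readings (ts TEXT, topic TEXT, value TEXT);'
# mosquitto_sub -v prints "topic payload" pairs, one message per line
mosquitto_sub -h broker.example.com -t 'weather/#' -v | while read -r topic value; do
    ts=$(date -u +'%Y-%m-%dT%H:%M:%SZ')   # timestamp in UTC, as in the real logger
    sqlite3 "$DB" "INSERT INTO readings VALUES ('$ts', '$topic', '$value');"
done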


Building the Web Application: Tech Stack and Features

With the data logging mechanism in place, I built a web interface using:

  • Backend: Flask (a Python web framework)
  • Frontend: Basic HTML, CSS, and JavaScript

This web application provides several key features:

  • Average Sensor Readings: Displays summarized information for various sensors.
  • Historical Data Visualization: Allows users to view past data, classified by minutes, hours, and days.
  • Data Export: The entire database can be downloaded as a CSV file for offline analysis or use by customers.
Web application interface

Navigating Deployment Challenges: From VMs to App Services

I first tried deploying the system on an Azure Virtual Machine, but I faced several issues:

Azure VM issues diagram Security concerns diagram
  • Lack of HTTPS out-of-the-box: Securing the application required manual SSL certificate configuration.
  • No Friendly URL: Access was via a public IP address, which isn't user-friendly.
  • Security Concerns: Managing security on a VM can be complex.
  • Manual Management: Updates and maintenance were time-consuming.
  • Performance Issues: Lower-tier (SKU) VMs often froze under load.

After discussing these issues with friends (thank you, Tharindu 😉), the clear recommendation was to containerize the application using Docker and deploy it as an Azure App Service. This approach offered a much smoother path.

Docker and Azure App Service

Azure App Services provides various ways to run web applications, including options for static web apps, web apps with databases, and WordPress sites. Given my need for a dynamic application with features like persistent storage and a separate, continuously running data logger, I opted for the Web App service.

Azure App Service options

To streamline deployment, I packaged the entire project, including the Flask application and the Python data logger, into a Docker image. This image was then pushed to GitLab's Container Registry and made public, allowing Azure App Services to easily pull and deploy it.
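
A minimal sketch of that build-and-push flow is shown below; the registry namespace and tag are placeholders rather than the project's actual image name.

docker build -t registry.gitlab.com/<namespace>/weather-dashboard:latest .
docker login registry.gitlab.com
docker push registry.gitlab.com/<namespace>/weather-dashboard:latest

Making the image public means the App Service can pull it without any registry credentials being configured on the Azure side.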

Container source selection in Azure Azure App Service configuration

You can select your container from different sources when setting up the App Service.

App Service creation

Finally, the system is running!

System running successfully

The Result: A Secure and Scalable Web App ✨

I'm thrilled to say the system is now running smoothly! The final website is accessible, secure, and much easier to manage.

This project was a fantastic learning experience. I now have a secure (HTTPS-enabled) application with minimal maintenance overhead, thanks to Azure App Services. It also benefits from inbuilt scaling capabilities and protection mechanisms like Azure Front Door (if configured).

Final application view

Future Enhancements 🛠️

While the current version is a significant step up, there are a couple of features I plan to add next:

  • User Authentication: Implement a sign-in page to manage access for different users.
  • Integrated Database Solution: Migrate data storage from the SQLite database within the Docker container (which can be difficult to access for direct downloads) to Azure's inbuilt database services. This will make data management and backups more robust and accessible.

Solar Weather Station with Supercapacitors

Solar-powered weather station setup

Powering the Future: Integrating a Miniature Weather Station with Solar and Supercapacitors

Published on: May 10, 2025

In my ongoing exploration of innovative solutions for sustainability, I've embarked on a project to integrate a miniature weather station with solar power. To ensure reliability and longevity, I've opted to use a supercapacitor as the backup energy storage solution. This post details the journey, the technology, and the performance insights.

Starting the Journey with Supercapacitors

I began by testing the application of supercapacitors in various projects. That experience has been invaluable, and I now want to apply what I've learned to make the weather station more sustainable and efficient.

Converting Our Miniature Weather Station to Solar-Powered Operation

Miniature weather station components and solar panel

The next phase of this project involves converting the miniature weather station to run on solar power. By integrating a supercapacitor, which acts as a buffer between the variable output of solar panels and the constant demand from our weather station sensors, we can achieve reliable and consistent readings even during periods of low sunlight.

Why Supercapacitors?

2.7V 30F Supercapacitor component

2.7V 30F Supercapacitor

Supercapacitors offer several advantages over traditional batteries, such as lithium cells. Here are the key reasons why they are well suited to this solar-powered weather station project:

High Power Density

  • Quick Current Delivery: Deliver high currents quickly, suitable for intermittent and peak loads.
  • Immediate Response to Demand: Provide immediate power when needed, ensuring consistent readings during low sunlight periods.

Long Cycle Life

  • Improved Efficiency: Maintain efficiency better under frequent discharges and recharges.
  • Reliability in Variable Conditions: Robust performance in varying environments (e.g., hot direct sunlight), ensuring consistent operation over time.

Environmental Considerations of Traditional Batteries

While supercapacitors offer benefits, it's also worth noting the environmental impact of traditional battery solutions they can help mitigate:

  • E-waste Concerns: Traditional batteries can pose significant e-waste issues due to toxic components (like heavy metals) and high recycling costs or complexities.
  • Resource Intensive: The manufacturing of many battery types requires substantial resources (e.g., lithium, cobalt), the extraction and processing of which can have considerable environmental implications.

Opting for solutions like supercapacitors, especially in applications where their characteristics are a good fit, can contribute to reducing these environmental burdens.

The Solution: A Block-by-Block Breakdown

My project breaks down into several key stages, utilizing the following main components:

  • ESP32C3 Supermini: The microcontroller brain.
  • BME680 Module: The environmental sensor.
  • TPS63802 Buck Boost Converter Module: For stable power delivery.
  • Solar Panel (5V 200mA): For energy harvesting.
  • Supercapacitors (2x 2.7V 30F in series): For energy storage.

The process involves:

  1. Energy Harvesting: Capturing sunlight using a solar panel.
  2. Energy Storage: Storing the harvested energy using supercapacitors.
  3. Power Management: Providing a stable voltage to the microcontroller and sensor despite fluctuating storage voltage.
  4. The Brain: An ESP32C3 microcontroller handling sensor reading, WiFi, and data transmission.
  5. The Sensor: A BME680 measuring temperature, humidity, pressure, and gas resistance.
  6. The Software: Firmware running on the ESP32C3 to control everything.

Let's look at each part in more detail.

1. Harvesting Sunlight: The Solar Panel

The energy source for this project is a 5V 200mA solar panel. This is a standard, relatively small panel suitable for low-power applications. It captures sunlight and converts it into electrical energy.

2. Storing the Energy: Supercapacitors

Instead of a traditional battery, I'm using two 2.7V 30F supercapacitors connected in series. Supercapacitors are fantastic for this kind of application because they can charge and discharge very quickly and tolerate a huge number of cycles compared to batteries.

  • Series Connection: Connecting the two 2.7V caps in series allows them to handle a higher total voltage (up to 5.4V). However, connecting capacitors in series reduces the total capacitance – in this case, two 30F caps in series behave like a single 15F capacitor.
  • Charging Protection: A diode is connected in series between the solar panel and the supercapacitors. This is crucial! It acts as a one-way valve, allowing current to flow from the solar panel to the supercapacitors, but preventing the stored energy in the supercapacitors from flowing back into the solar panel when the sun isn't providing enough voltage (like at night).
  • Balancing Act: When connecting capacitors (especially supercapacitors) in series, it's essential to ensure the voltage is shared equally across them. I've implemented voltage balancing using 4.8 MegaOhm resistors connected in parallel with each supercapacitor. These resistors help bleed off excess voltage, allowing the voltage to equalize. For more on this, see resources like the Analog Devices document on supercapacitor balancing.

3. Stable Power for the Electronics: The Buck-Boost Module

The voltage stored in the supercapacitors will fluctuate (from near 0V up to ~5.4V). Microcontrollers and sensors need a stable operating voltage. This is where the TPS63802 Buck-Boost module comes in. It takes the fluctuating voltage from the supercapacitors and converts it into a stable 3.3V output, ideal for the ESP32C3 and BME680. (The module can also be set to 4.2V or 5V).
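
For a rough sense of the energy budget (a back-of-the-envelope estimate, assuming the converter keeps regulating down to about 1.3 V of input, which is in the ballpark of the TPS63802's rated minimum):

Series capacitance:  (1/30 F + 1/30 F)^-1 = 15 F
Energy at full charge:  E = 1/2 × C × V² = 0.5 × 15 F × (5.4 V)² ≈ 219 J
Usable energy down to 1.3 V:  0.5 × 15 F × ((5.4 V)² − (1.3 V)²) ≈ 206 J

That is on the order of a couple of percent of the energy in a typical AA cell, which is why the reporting frequency tracks the available sunlight so closely and why the power-saving work described later matters.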

TPS63802 Buck Boost Converter Module
TPS63802 Buck-Boost Converter Module.

4. The Brains: ESP32C3 Microcontroller

The core of the system is the ESP32C3, a modern, low-power microcontroller with built-in WiFi. It's responsible for:

ESP32C3 Supermini Microcontroller
ESP32C3 Supermini.
  • Initializing and reading data from the BME680 sensor.
  • Managing WiFi connectivity.
  • Formatting the sensor data.
  • Publishing the data to an MQTT broker.
  • Potentially managing power modes (area for future optimization).

5. The Sensor: Bosch BME680

For environmental data, I chose the BME680 sensor. This single module from Bosch measures:

BME680 Environmental Sensor Module
BME680 Environmental Sensor.
  • Temperature
  • Relative Humidity
  • Barometric Pressure
  • Gas Resistance (related to air quality)

The Software: Firmware and Libraries

Bringing the hardware to life requires robust software. The firmware running on the ESP32C3 is based on an open-source project specifically designed for interfacing with the BME680 and connecting via WiFi/MQTT. The code I'm using is from this repository: manupawickramasinghe/bme680-wifi-sensor-firmware.

This firmware is built using the ESP-IDF framework (Espressif IoT Development Framework) and leverages an open-source driver for the BME680: gschorcht/bme680-esp-idf.

Here's a summary of the key features provided by this firmware:

  • Multi-Interface Support: Communicates with BME680 via I2C or SPI.
  • ESP32/ESP8266 Support: Compatible with various ESP platforms.
  • WiFi Connectivity: Includes SmartConfig for easy provisioning.
  • MQTT Data Publishing: Sends sensor readings to an MQTT broker.
  • Advanced Sensor Configurations: Allows customization of BME680 settings (oversampling, IIR filter, heater profiles).
  • Automated Builds and Testing: CI/CD using GitHub Actions for code quality.
  • Structured Data Output: Formats sensor readings into JSON for MQTT.

This firmware provides a solid foundation, handling the complexities of the BME680 interface, WiFi connectivity, and data transmission.
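
For reference, a typical ESP-IDF build-and-flash sequence for a project like this looks as follows. It assumes the repository above is hosted on GitHub under that name, that ESP-IDF is installed and exported into the current shell, and that the project builds for the ESP32-C3 target; the exact menuconfig entries vary by project.

git clone https://github.com/manupawickramasinghe/bme680-wifi-sensor-firmware.git
cd bme680-wifi-sensor-firmware
idf.py set-target esp32c3     # the board used here is an ESP32-C3 Supermini
idf.py menuconfig             # WiFi/MQTT and BME680 interface options are configured here
idf.py build flash monitor    # compile, flash over USB, then watch the serial log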

How It All Comes Together

In summary, sunlight hits the solar panel, its energy is directed via the diode into the supercapacitors for storage (kept balanced by resistors). The TPS63802 buck-boost takes the variable voltage from the supercaps and outputs a stable 3.3V. This 3.3V powers the ESP32C3 and the BME680 sensor. The ESP32C3, running the custom firmware, reads data from the BME680, connects to the local WiFi network, and publishes the environmental readings as a JSON message to an MQTT broker.

Conceptual Circuit Diagram:

     +---------------------+     +---------------------+
     | Solar Panel (+)     |     | Solar Panel (-)     |
     +--------|------------+     +------------|--------+
              |                               |
              |  Diode (e.g. Schottky)        |
              |   --->|---                     |
              |    |                          |
              +----o--------------------------+
                   |
                   | Supercapacitor Bank (e.g., 2 in series)
                   |
         +---------+---------+
         |                   |
       -----               -----
R_balance1 || --- C1 (2.7V, 30F)--- || R_balance2
       -----               -----
         |                   |
         +--------o VIN -----+ (Input to Buck-Boost)
                  |
          +-------V-------+
          |  TPS63802     |
          |  Buck-Boost   |------> VOUT (3.3V) ----> VCC_ESP32 & VCC_BME680
          |  Module       |
          +-------|-------+
                  | GND
                  |
------------------o---------------------------------- GND
|                                                    |
|                                                    |
+---------|----------+                             +---------|----------+
|  ESP32-C3          |                             |  BME680 Sensor     |
|                    |                             |                    |
|         GPIO (SDA) o-----------------------------o SDA                |
|         GPIO (SCL) o-----------------------------o SCL                |
|                    |                             |                    |
|              GND   o-----------------------------o GND                |
|              3.3V  o-----------------------------o VCC (from 3.3V VOUT) |
+--------------------+                             +--------------------+
                        
Assembled weather station electronics with ESP32 and supercapacitors
The assembled electronics package, ready for deployment.

This creates a self-sufficient system capable of monitoring temperature, humidity, pressure, and air quality, powered entirely by the sun and stored energy in the supercapacitors.

Performance Insights: What the Data Tells Us

Analyzing the collected data revealed a lot about the system's behavior:

  • Temperature Trends: We see the expected daily warming/cooling and the sensor's fine resolution (small ±0.02 °C noise).
  • Location Impact: A significant spike up to ~46°C highlighted a major issue: the sensor wasn't measuring ambient air, but the heat radiating from the hot concrete it was initially placed on. Relocating it helped mitigate this.
  • Power Fluctuation Noise: Subtle temperature jitter was more apparent in the morning and evening, likely linked to power instability during low light/charging transitions.

Crucially, the data reporting frequency varied significantly based on light conditions:

  • Morning (7 AM+): Starts slow (~15 min/message), increasing to ~5 min/message as sunlight intensifies.
  • Mid-day (10 AM+): Becomes very frequent (almost every second) under full sun. Clouds cause it to drop back to 5-15 mins.
  • Evening (After 5 PM): Frequency drops drastically (5-15 mins -> 30 mins), stopping completely after 6:30/7 PM as supercapacitors deplete without solar input.

This changing frequency is a direct indicator of the system reacting to available solar power and supercapacitor voltage. More light means more power, allowing the ESP32 to operate and transmit more often. Less light means less power, forcing it to slow down or eventually enter deep sleep until sufficient charge is regained.

Data plot samples 1-6.

Current Limitations & What's Next

While the project is operational and demonstrates the concept effectively, I've identified a few areas for improvement and future development:

  • Sensor Accuracy: The BME680 is sensitive to heat generated by the ESP32C3 itself, which can affect temperature and humidity readings if not properly isolated.
    Future: Design an enclosure that physically separates the sensor from the main microcontroller board, ensuring better airflow around the sensor.
  • Placement Heat: Initial tests showed that heat absorbed and radiated from the surface the device sits on (e.g., concrete) can significantly skew readings.
    Solution: The device is now mounted on a wooden pole to minimize ground heat interference and improve air circulation.
  • Power Efficiency: The current firmware isn't fully optimized for ultra-low power consumption, potentially draining the supercapacitors faster than ideal during prolonged low-light conditions.
    Future: Implement aggressive ESP32C3 Deep Sleep modes, optimize WiFi connection/disconnection times, minimize active mode duration, and refine supercapacitor balancing for minimal quiescent current.
  • Solar Charging Circuit: The current simple diode-based charging is functional but incurs a voltage drop.
    Future: Explore more efficient solar charging ICs (e.g., MPPT controllers suitable for low power) to maximize energy harvesting and reduce losses.