The Straba Network Hub (a play on Strava) is a portal for hosting data originating from multiple Carmins (a play on Garmin: our functional reproduction of a smart watch built on an ESP32). The project iterated on our previous Carmin watch by collecting data from many individual smart watches, sourcing and passing data across a wireless network/routers bidirectionally, making the data available from any browser, and aggregating the data to display leaders for specific metrics.
To get the Carmins connected over WiFi, we created a free DDNS account with the No-IP provider and set up our local "Group 7" router to connect to the service. We then set up our laptop as a web server (a Node.js server) and accessed it through the hosted DDNS name, both on and off the BU network (e.g., from a cellular connection). Our hostname is group7.ddns.net, and we brought up basic WiFi functionality on the ESP32 by altering the WIFI_SSID.
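A minimal sketch of that station-mode bring-up is shown below, following the standard ESP-IDF flow; the SSID and password macros are placeholders for the Group 7 router credentials, and a full implementation would connect from the WIFI_EVENT_STA_START event handler rather than immediately after start.

```c
#include <string.h>
#include "nvs_flash.h"
#include "esp_netif.h"
#include "esp_event.h"
#include "esp_wifi.h"

#define WIFI_SSID "Group7"      /* placeholder: the Group 7 router's SSID */
#define WIFI_PASS "password"    /* placeholder: the router's password */

static void wifi_init_sta(void)
{
    ESP_ERROR_CHECK(nvs_flash_init());               /* Wi-Fi driver keeps calibration data in NVS */
    ESP_ERROR_CHECK(esp_netif_init());
    ESP_ERROR_CHECK(esp_event_loop_create_default());
    esp_netif_create_default_wifi_sta();

    wifi_init_config_t init_cfg = WIFI_INIT_CONFIG_DEFAULT();
    ESP_ERROR_CHECK(esp_wifi_init(&init_cfg));

    wifi_config_t sta_cfg = { 0 };
    strncpy((char *)sta_cfg.sta.ssid, WIFI_SSID, sizeof(sta_cfg.sta.ssid));
    strncpy((char *)sta_cfg.sta.password, WIFI_PASS, sizeof(sta_cfg.sta.password));

    ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_STA));
    ESP_ERROR_CHECK(esp_wifi_set_config(WIFI_IF_STA, &sta_cfg));
    ESP_ERROR_CHECK(esp_wifi_start());
    esp_wifi_connect();   /* join the router that the group7.ddns.net name points at */
}
```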
Data plotting is done through CanvasJS and follows two steps: (1) feeding the live data from the serial port directly into the browser client, with socket.io as the link between the Node app and the client JavaScript; and (2) feeding the live data to the browser while saving it to a file local to the Node server. The ESP32 writes sensor data to the serial port, which the JS file reads and appends to a CSV file (for storage). The Node.js server then watches for updates to the CSV file and pushes new data to the sensor graphs in real time.
For time tracking we used an alphanumeric display that interfaces with the ESP32 over I2C. The current time is sent from the Node.js server to the serial port once, and from then on is tracked and incremented using the ESP32's own timekeeping.
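The sketch below captures that timekeeping loop under a few assumptions: the start time arrives once over UART 0 as a number of seconds, the UART driver has already been installed, and alpha_display_write() is a stand-in for the real I2C writes to the display.

```c
#include <stdio.h>
#include <stdlib.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/uart.h"

/* Hypothetical stand-in for the real I2C transaction to the alphanumeric backpack. */
static void alpha_display_write(const char *text)
{
    printf("DISPLAY: %s\n", text);
}

static void clock_task(void *arg)
{
    char rx[16] = { 0 };
    /* The Node.js server writes the time (as seconds) to the serial port exactly once. */
    int len = uart_read_bytes(UART_NUM_0, (uint8_t *)rx, sizeof(rx) - 1, portMAX_DELAY);
    if (len > 0) rx[len] = '\0';
    unsigned int seconds = (len > 0) ? (unsigned int)atoi(rx) : 0;

    char text[8];
    while (1) {
        /* Format as MMSS; from here on the ESP32 keeps time on its own. */
        snprintf(text, sizeof(text), "%02u%02u", (seconds / 60) % 100, seconds % 60);
        alpha_display_write(text);
        vTaskDelay(pdMS_TO_TICKS(1000));
        seconds++;
    }
}
```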
We used a thermistor to measure temperature, converting the ADC readings from the thermistor into Celsius values. Once the temperature exceeded a set threshold (40 C), the buzzer would go off.
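The ADC-to-Celsius step is a standard Beta-equation conversion. The sketch below assumes a 10 kΩ NTC on the high side of a divider with a 10 kΩ resistor, Beta = 3950, and the buzzer on GPIO 27; these are illustrative values, not necessarily the ones on our board.

```c
#include <math.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/adc.h"
#include "driver/gpio.h"

#define BUZZER_GPIO  27          /* assumed buzzer pin */
#define SERIES_R     10000.0f    /* divider resistor to ground (ohms), assumed */
#define NOMINAL_R    10000.0f    /* thermistor resistance at 25 C, assumed */
#define BETA         3950.0f     /* Beta coefficient, assumed */
#define ALARM_C      40.0f       /* threshold from the write-up */

static float read_temp_c(void)
{
    int raw = adc1_get_raw(ADC1_CHANNEL_6);              /* 12-bit reading, 0..4095 */
    if (raw <= 0) raw = 1;                               /* avoid a divide-by-zero on a bad read */
    float r = SERIES_R * ((4095.0f / raw) - 1.0f);       /* thermistor resistance (high-side NTC) */
    float inv_t = 1.0f / 298.15f + logf(r / NOMINAL_R) / BETA;
    return (1.0f / inv_t) - 273.15f;                     /* Kelvin -> Celsius */
}

static void temp_task(void *arg)
{
    adc1_config_width(ADC_WIDTH_BIT_12);
    adc1_config_channel_atten(ADC1_CHANNEL_6, ADC_ATTEN_DB_11);
    gpio_set_direction(BUZZER_GPIO, GPIO_MODE_OUTPUT);

    while (1) {
        gpio_set_level(BUZZER_GPIO, read_temp_c() > ALARM_C);   /* buzzer on above 40 C */
        vTaskDelay(pdMS_TO_TICKS(500));
    }
}
```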
Activity tracking was done with a single button cycling between states. Upon the first press, sampling started and sensor data was processed and displayed; upon the second press, sampling stopped; upon the third press, the data was reset. The cycle continues to loop: start -> stop -> reset -> start -> ...
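A sketch of that three-press cycle is below; the button pin, the active-low wiring, and the simple polling debounce are assumptions rather than a copy of how the real firmware handles the input.

```c
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/gpio.h"

#define BUTTON_GPIO 15          /* assumed pin; button wired active-low */

typedef enum { STATE_RESET, STATE_SAMPLING, STATE_STOPPED } activity_state_t;

static volatile activity_state_t state = STATE_RESET;

static void button_task(void *arg)
{
    gpio_set_direction(BUTTON_GPIO, GPIO_MODE_INPUT);
    gpio_set_pull_mode(BUTTON_GPIO, GPIO_PULLUP_ONLY);

    int last = 1;
    while (1) {
        int now = gpio_get_level(BUTTON_GPIO);
        if (last == 1 && now == 0) {                              /* falling edge = press */
            switch (state) {
            case STATE_RESET:    state = STATE_SAMPLING; break;   /* 1st press: start sampling */
            case STATE_SAMPLING: state = STATE_STOPPED;  break;   /* 2nd press: stop sampling  */
            case STATE_STOPPED:  state = STATE_RESET;    break;   /* 3rd press: reset the data */
            }
        }
        last = now;
        vTaskDelay(pdMS_TO_TICKS(30));                            /* crude debounce / poll rate */
    }
}
```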
To plot our data we used a strategy similar to Quest 2: a Node.js server along with the CanvasJS graphing tool to plot the data dynamically. The JS code uses fs.watch() to watch for changes to the CSV file; since the JS writes new sensor data to the CSV line by line, each update triggers a read of the new data, which is pushed to the server using socket.io. The only change this time is the new leader functionality: we built a bar graph with the CanvasJS library and pushed a variable that continuously tracks the name and total step count of the leader.
Our rover buggy is a robust platform for autonomous driving that makes the round trip between a spaceship and the hot springs of Venus. We implemented (1) "cruise control" (or maintaining a constant velocity under perturbations), (2) "turn-around" (reversing the direction of the vehicle), and (3) "collision avoidance" by detecting obstructions and driving around them.
The approach involved (a) attaching sensors and the ESP to the vehicle, (b) enabling control using feedback from the wheel-speed sensor to maintain a speed setpoint (driving the vehicle motor), (c) using range sensors to detect and avoid objects, and (d) maintaining forward progress towards each waypoint (A and B).
We incorporated various functionalities to control a buggy with ultrasonic and LiDAR sensors for wall avoidance. Here's an overview of how the different parts of the code work together:
Upon startup, the ESP32 initializes its Wi-Fi connection and starts the UDP server to listen for commands. The LiDAR sensor continuously measures the distance in front of the buggy. If an obstacle is detected within a predefined range, the buggy stops or takes necessary action. The ultrasonic sensors on both sides assist in navigating by constantly measuring the side distances. The steer_task uses these measurements to adjust the buggy's direction. Speed is regulated based on encoder readings, and the servo motor adjusts the steering angle as per the PID controller's output. Commands received via the UDP server (start, stop, turn) control the buggy's movements, allowing remote operation. The alphanumeric display can show relevant information like speed, distance, or elapsed time.
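Concretely, the control core is an ordinary PID update. The sketch below shows the speed side of it; the gains and the 100 ms loop period are chosen for illustration rather than taken from our tuned firmware.

```c
typedef struct {
    float kp, ki, kd;      /* gains: illustrative, not our tuned values */
    float integral;
    float prev_error;
} pid_ctrl_t;

/* One PID step: setpoint and measurement in the same units (e.g. m/s from the encoder),
   dt in seconds; the return value nudges the motor duty or the servo angle. */
static float pid_update(pid_ctrl_t *pid, float setpoint, float measured, float dt)
{
    float error = setpoint - measured;
    pid->integral += error * dt;
    float derivative = (error - pid->prev_error) / dt;
    pid->prev_error = error;
    return pid->kp * error + pid->ki * pid->integral + pid->kd * derivative;
}

/* In speed_task, roughly every 100 ms:
 *     duty += pid_update(&speed_pid, target_speed, encoder_speed, 0.1f);
 * The steering servo uses the same structure, with the error taken from the
 * difference between the two side ultrasonic distances. */
```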
Initialization: Wi-Fi and peripherals (like I2C, LiDAR, ultrasonic sensors) are initialized. Task Creation: Multiple FreeRTOS tasks are created for handling different components (LiDAR, ultrasonic sensors, steering, speed control, display). Operation: The buggy operates based on sensor inputs and received UDP commands. The LiDAR and ultrasonic sensors play a key role in avoiding obstacles, while the servo and motor are controlled to navigate and maintain speed.
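In FreeRTOS terms that layout reduces to a handful of xTaskCreate() calls; the task names, stack sizes, and priorities below are illustrative rather than copied from our source.

```c
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

/* Task entry points defined elsewhere in the firmware. */
void lidar_task(void *arg);
void ultrasonic_task(void *arg);
void steer_task(void *arg);
void speed_task(void *arg);
void display_task(void *arg);

void start_buggy_tasks(void)
{
    /* Sensor and control tasks run above the display task so obstacle
       handling is never starved by screen updates. */
    xTaskCreate(lidar_task,      "lidar",      2048, NULL, 5, NULL);
    xTaskCreate(ultrasonic_task, "ultrasonic", 2048, NULL, 5, NULL);
    xTaskCreate(steer_task,      "steer",      2048, NULL, 4, NULL);
    xTaskCreate(speed_task,      "speed",      2048, NULL, 4, NULL);
    xTaskCreate(display_task,    "display",    2048, NULL, 2, NULL);
}
```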
When the UDP server receives a stop command, it activates an emergency brake (e_brake). The start command deactivates the emergency brake. The turn command initiates a 360-degree turn maneuver. This ESP32-based system showcases a sophisticated approach to robotic control, integrating various sensors and communication protocols for efficient and responsive operation.
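A stripped-down sketch of that command listener is shown below. The command strings and the e_brake flag come from the description above, the port number matches the 8080 mentioned in the next paragraph, and the socket code is ordinary lwIP/BSD boilerplate.

```c
#include <string.h>
#include <stdbool.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "lwip/sockets.h"

static volatile bool e_brake = true;     /* stay braked until a "start" command arrives */
void do_turn_maneuver(void);             /* defined elsewhere in the firmware */

static void udp_server_task(void *arg)
{
    int sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_IP);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(8080),
        .sin_addr.s_addr = htonl(INADDR_ANY),
    };
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    char rx[32];
    while (1) {
        int len = recvfrom(sock, rx, sizeof(rx) - 1, 0, NULL, NULL);
        if (len <= 0) continue;
        rx[len] = '\0';

        if (strcmp(rx, "stop") == 0)  e_brake = true;        /* emergency brake on  */
        if (strcmp(rx, "start") == 0) e_brake = false;       /* emergency brake off */
        if (strcmp(rx, "turn") == 0)  do_turn_maneuver();    /* 360-degree turn     */
    }
}
```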
For the ESP32 to receive these messages, it must be connected to a network that is reachable via the group7.ddns.net address. Since the ESP32 sits behind a router (as in our setup), port forwarding must be configured on the router to forward UDP traffic on port 8080 to the ESP32's internal IP address.
Since the server sends commands to an ESP32 device over the internet, it is crucial to consider the security implications, especially when the commands control a physical device. The communication should be secured and the requests ideally authenticated to prevent unauthorized access or control.
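One lightweight option, assuming the commands stay as plain UDP text, is to require a pre-shared token in front of every command and silently drop packets that lack it; the token below is of course a placeholder, and a real deployment would want something stronger (for example, an HMAC over the payload or a TLS-protected channel).

```c
#include <string.h>
#include <stddef.h>

#define CMD_TOKEN "group7-secret:"   /* placeholder pre-shared token */

/* Returns the command body if the packet starts with the shared token, else NULL. */
static const char *authenticate(const char *packet)
{
    size_t n = strlen(CMD_TOKEN);
    return (strncmp(packet, CMD_TOKEN, n) == 0) ? packet + n : NULL;
}

/* In the UDP loop above:
 *     const char *cmd = authenticate(rx);
 *     if (cmd == NULL) continue;          // ignore unauthenticated packets
 *     if (strcmp(cmd, "stop") == 0) ...
 */
```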
For this quest we implemented a parking meter system composed of ESP32-powered parking meters and key fobs. It integrates infrared communication, QR code functionality, and LED indicators for parking space allocation. The system communicates with an authentication server, handled through a Node.js script, for efficient space management. Key features include Wi-Fi connectivity, peripheral integration, and task management using FreeRTOS, making the system an innovative solution for parking management.
The parking meter has an OLED and an IR receiver that waits for a signal. Once a signal containing the FobID is received, the OLED is turned on and the status LED changes to blue to indicate a loading state. The OLED then displays a generated bitmap QR code that contains the MeterID along with the FobID.
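A sketch of the meter-side flow is below. It assumes the IR receiver feeds a UART (as the fob description suggests), and led_set_blue() and oled_show_qr() are hypothetical helpers standing in for the actual LED control and OLED/QR-rendering code.

```c
#include <stdio.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/uart.h"

#define METER_ID 3                       /* placeholder meter ID */
#define IR_UART  UART_NUM_1              /* UART fed by the IR receiver (assumed wiring) */

void led_set_blue(void);                 /* hypothetical helper: status LED -> loading  */
void oled_show_qr(const char *payload);  /* hypothetical helper: render the bitmap QR   */

static void meter_task(void *arg)
{
    char fob_id[8];
    char payload[32];
    while (1) {
        /* Block until the IR receiver delivers a FobID over the UART. */
        int len = uart_read_bytes(IR_UART, (uint8_t *)fob_id, sizeof(fob_id) - 1, portMAX_DELAY);
        if (len <= 0) continue;
        fob_id[len] = '\0';

        led_set_blue();                                            /* loading state */
        snprintf(payload, sizeof(payload), "%d:%s", METER_ID, fob_id);
        oled_show_qr(payload);                                     /* QR encodes MeterID + FobID */
    }
}
```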
The key fob is an IR transmitter. On a button press, the LED indicator changes to LOADING (blue) and the IR signal is sent with the FobID.
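The fob side is the mirror image. In the sketch below, the button pin, the fob ID string, and led_set_loading_blue() are placeholders, and the FobID goes out over the UART that drives the IR transmitter.

```c
#include <string.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/uart.h"
#include "driver/gpio.h"

#define FOB_ID_STR   "7"                 /* placeholder fob ID */
#define FOB_BUTTON   15                  /* assumed button pin, active-low */
#define IR_UART      UART_NUM_1          /* UART driving the IR transmitter (assumed wiring) */

void led_set_loading_blue(void);         /* hypothetical helper for the status LED */

static void fob_task(void *arg)
{
    gpio_set_direction(FOB_BUTTON, GPIO_MODE_INPUT);
    gpio_set_pull_mode(FOB_BUTTON, GPIO_PULLUP_ONLY);

    while (1) {
        if (gpio_get_level(FOB_BUTTON) == 0) {                          /* button pressed */
            led_set_loading_blue();                                     /* LOADING (blue) */
            uart_write_bytes(IR_UART, FOB_ID_STR, strlen(FOB_ID_STR));  /* FobID out over IR */
            vTaskDelay(pdMS_TO_TICKS(500));                             /* debounce / resend gap */
        }
        vTaskDelay(pdMS_TO_TICKS(30));
    }
}
```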
The data is initialized using query.js, which creates two databases: one for logging every authentication query and one for keeping track of the unique IP addresses of the meters and fobs.
query.js is also the code that constantly listens for requests and is the part of the system with constant uptime. Upon initialization, each fob and meter pings the authentication server; if the ping is successful, the device sends a message identifying itself with its device type (fob or meter) and its ID. Once the server has received this message, it logs the device type and ID along with the IP address it came from; only unique IP addresses are stored in memory. Upon receiving an OK from the server acknowledging that the device is registered, the device's ESP32 becomes a constant listener awaiting instructions.
Once a query request is sent through (via the scanning of a QR code), the server broadcasts instructions only to the relevant fob and meter to change their status LEDs. Both the meter and the fob receive either a TAKEN or an OPEN message. For the fob, TAKEN means the fob is associated with a meter, and its green LED turns on. For the meter, TAKEN means the spot is taken, and its red LED turns on. Conversely, OPEN turns on the green LED for the meter and the red LED for the fob.
Smart Voting is a system that leverages MQTT5 technology to establish voting stations, enabling users to cast votes for their preferred Northeastern state. Options include Connecticut, Maine, New Hampshire, New York, Rhode Island, and Vermont.
The code enables the key fob to communicate using infrared signals and UART (Universal Asynchronous Receiver-Transmitter), and also allows it to connect to a network via WiFi.
The keyfob, once programmed with this code, operates automatically. It can send and receive data, change its display based on the received information, and communicate over a network. A user can interact with it via a button to change its ID.
This code implements a voting system using an ESP32 microcontroller. It integrates MQTT for network communication, allowing the system to publish and subscribe to topics related to voting. The application is designed to register votes from users, display voting states, and handle restrictions on voting.
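A condensed sketch of that MQTT plumbing is shown below, using the esp-mqtt client with v4-style config fields; the broker URI and topic names are placeholders rather than the ones used in our deployment.

```c
#include <stdio.h>
#include "esp_event.h"
#include "mqtt_client.h"

#define BROKER_URI  "mqtt://192.168.1.10"    /* placeholder broker address */
#define VOTE_TOPIC  "quest/vote"             /* placeholder topic names    */
#define CTRL_TOPIC  "quest/control"

static esp_mqtt_client_handle_t client;

static void mqtt_event_handler(void *arg, esp_event_base_t base, int32_t event_id, void *event_data)
{
    esp_mqtt_event_handle_t event = event_data;
    if (event_id == MQTT_EVENT_CONNECTED) {
        /* Listen for voting-state changes and restrictions pushed by the Node.js server. */
        esp_mqtt_client_subscribe(client, CTRL_TOPIC, 1);
    } else if (event_id == MQTT_EVENT_DATA) {
        printf("control: %.*s\n", event->data_len, event->data);   /* e.g. "voting closed" */
    }
}

void mqtt_start(void)
{
    esp_mqtt_client_config_t cfg = { .uri = BROKER_URI };   /* v4.x-style config field */
    client = esp_mqtt_client_init(&cfg);
    esp_mqtt_client_register_event(client, ESP_EVENT_ANY_ID, mqtt_event_handler, NULL);
    esp_mqtt_client_start(client);
}

/* Called when the user confirms a selection on the voting station. */
void publish_vote(const char *state_name)
{
    esp_mqtt_client_publish(client, VOTE_TOPIC, state_name, 0, 1, 0);   /* len 0 -> use strlen */
}
```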
This Node.js application implements a real-time voting system utilizing MQTT for messaging, Express for serving web content, and Socket.io for real-time client-server communication. It monitors and updates vote counts in a CSV file and communicates with MQTT clients.
VoteMap is a web application designed for live visualization of polling data across different states. It uses D3.js for rendering an interactive map, Socket.io for real-time updates, and a backend that monitors vote counts in a CSV file. This application provides a dynamic, user-friendly interface to display and interact with live polling data.