Building an app on AWS to help save energy bills and the environment

The idea behind this article came to me a while back. While writing a previous article about predicting energy demand, I thought of building an app that would notify users when electricity prices are high. Originally this was meant to be an addition to that previous application, but I quickly realized that regular people would get more benefit out of it as a standalone app.

I haven’t worked on the app itself yet, but to summarize: the back-end will gather real-time electricity prices and notify users when the price rises in their location. This allows people on time-of-use utility plans to reduce their bills and helps offload demand on the grid during peak times. The back-end of the mobile app (all the stuff the user doesn’t see) requires a web-scraping script to gather this price data from the various ISO websites. This article discusses how I did that using only Python and some AWS services.

Web scraping is just the process of gathering data from a website using a software program. Although using an API is preferable, it’s not uncommon to find that some web services don’t provide one (or only allow stakeholders to access it). This means we need to find alternative ways to get the data. The scripts themselves are relatively simple; once they are written, though, we need a way to run them automatically and in a scalable way. This is where AWS services come in handy.

Requirements

  • Scripts that will gather the price data from ISO websites.
  • Store the price data in a database for further use.
  • Automate the scripts so they run every 5 minutes.

The script I wrote uses the requests Python library to make a GET request to the ISO website. The response contains the HTML you would see by inspecting elements on the webpage. Another library called Beautiful Soup is used to parse this HTML and makes it easy to find specific elements. The code for one of the ISOs can be seen below.

import requests
import logging
from bs4 import BeautifulSoup

lmp_dict = {
    "DPL": "",
    "COMED": "",
    "AEP": "",
    "EKPC": "",
    "PEP": "",
    "JC": "",
    "PL": "",
    "DOM": ""
}

def main():
    try:
        URL = 'https://www.pjm.com/'
        page = requests.get(URL)
        soup = BeautifulSoup(page.content, 'html.parser')

        """
        There are multiple tables of class 'lmp-price-table'. This
        loop goes through all the divs in the tables containing the RSO name
        and the LMP, and if the key exists in the dictionary (keys are the RSO
        names) it stores the next div's text as the value.
        """
        for i in soup.find_all('ul', class_='lmp-price-table'):
            divs = i.findChildren('div')
            j = 0
            while j < len(divs):
                value = divs[j].text
                if value in lmp_dict:
                    lmp_dict[value] = divs[j + 1].text
                j += 1

        print(lmp_dict)

    except Exception as e:
        raise e

if __name__ == "__main__":
    main()

A script was created for each of a few different ISOs, which all have different websites. The scripts can be viewed in my GitHub repo below.

Github Repo

Once the scripts worked, they needed to be hosted on AWS. Lambda is a service well suited to simple scripts like these: it lets you invoke them from other services or run them on a schedule with CloudWatch. More information can be found here. We also needed to store the price data in a database. I decided to use DynamoDB, a NoSQL database, because it is much cheaper for read operations (we’ll only be writing new data every 5 minutes) and, if the schema needs to change in the future, there won’t be any major issues. So the Lambda function needed a handler so that it could be invoked, plus an additional function for putting the data into DynamoDB.
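A minimal sketch of what that could look like, assuming main() from the scraper above is refactored to return lmp_dict instead of printing it, and assuming a DynamoDB table named electricity_prices with zone as the partition key and timestamp as the sort key (the table name and key schema here are assumptions, not the exact code from the repo):

import time
import boto3

# Assumed table name; the real table in the project may be named differently.
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('electricity_prices')

def put_prices(lmp_dict):
    # Write one item per zone, keyed by the zone name and the scrape time.
    timestamp = int(time.time())
    with table.batch_writer() as batch:
        for zone, price in lmp_dict.items():
            batch.put_item(Item={
                'zone': zone,            # assumed partition key
                'timestamp': timestamp,  # assumed sort key
                'price': price
            })

def lambda_handler(event, context):
    lmp_dict = main()    # main() refactored to return the scraped dictionary
    put_prices(lmp_dict)
    return {'statusCode': 200}

boto3’s batch_writer handles the batching and retries for you, which keeps the write code short.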

Once the scripts were working and storing data in the DynamoDB table, the only thing left to do was to automate them. CloudWatch is a really handy service for collecting logs from events, and Lambda functions publish their logs to CloudWatch by default whenever they run. Under the events section of the CloudWatch dashboard there is an option to create a schedule. I set the schedule to every 5 minutes and set the target as my two Lambda scripts. This means we don’t have to worry about running anything manually, and fail/success events can be monitored if you need to know about them (such as sending yourself an email when a run fails).
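The same schedule can also be set up programmatically with boto3 rather than through the console. The sketch below uses a hypothetical rule name and function ARN purely for illustration; you would substitute your own.

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Hypothetical names/ARN for illustration only.
RULE_NAME = 'scrape-prices-every-5-minutes'
FUNCTION_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:pjm-price-scraper'

# Create (or update) a rule that fires every 5 minutes.
rule = events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression='rate(5 minutes)',
    State='ENABLED'
)

# Allow CloudWatch Events to invoke the Lambda function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId='allow-cloudwatch-schedule',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)

# Point the rule at the Lambda function.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{'Id': 'price-scraper', 'Arn': FUNCTION_ARN}]
)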

There is a lot more to do with this project, but for the time being, that’s everything to do with the data collection.