Mike Bueno

Mike is one of our members who is into EVERYTHING and a successful entrepreneur!

His latest interest is working with the Internet of Things (IoT) and automating his home. Expect some great How-To articles from Mike here!

About Mike Bueno

Energy conservation… Heating and cooling our homes and workplaces contributes greatly to global warming and our carbon footprint. Here’s one technology-based solution…

My home office has always been warm due to the computer hardware, a constantly running 3D printer, and the fact that it’s on the second floor of my home, in direct line of sight of the sun no matter the season.

Several years ago, I automated the opening and closing of my office windows so I could take advantage of the cooler air outside, as offered by my Northwestern US geography and climate. The idea was to open the windows when the outdoor temperature was more favorable than the indoor temperature. In my case, this typically meant I wanted the room to be cooler, but the idea works just the same for warming a room, like a cold basement…

I defined an ideal temperature of, say, 70° F by way of an adjustable thermostat that I could change as needed, and developed a heuristic to open and close my windows based on indoor temperature along with outdoor temperature, humidity, rainfall, solar, and wind information. This system, which I’ve dubbed openWindows 1.0, ran fairly well for several years, until I got tired of the poor reliability of how the different electronics modules communicated with one another.
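For flavor, here’s a minimal sketch of that kind of heuristic in Python. Everything in it is hypothetical: the thresholds, the deadband, and the inputs stand in for the real rules, which also weigh humidity, solar, and wind data.

```python
# A minimal, hypothetical sketch of an open/close heuristic.
# Temperatures are in degrees Fahrenheit; thresholds are assumptions.

SETPOINT_F = 70.0   # the adjustable thermostat value
DEADBAND_F = 1.0    # hysteresis so the windows don't chatter open/closed

def should_open(indoor_f, outdoor_f, raining, wind_mph):
    """Return True if opening the windows moves the room toward the setpoint."""
    if raining and wind_mph > 10:   # assumed wind-blown-rain threshold
        return False                # keep the rain out
    if indoor_f > SETPOINT_F + DEADBAND_F and outdoor_f < indoor_f:
        return True                 # room too warm; outside air will cool it
    if indoor_f < SETPOINT_F - DEADBAND_F and outdoor_f > indoor_f:
        return True                 # room too cool; outside air will warm it
    return False
```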

Think about it… In order to make this system work, I need sensors inside and outside of my home for the weather information. I also need a motor at each window I want to operate, along with a microprocessor and electronics to drive each motor. Then, I need one microprocessor to communicate with all the rest, to orchestrate the concerto. For all of this to work, the electronics modules/microprocessors have to communicate reliably with one another.

openWindows 1.0 worked very well for quite a while, but reliability became an issue over time. I had been using 2.4 GHz XBee radios from Digi. Digi makes great radios, and since my first windows automation attempt, their technology has grown by leaps and bounds. The reliability problem I’d experienced was due to the software library I’d been using for inter-device communication. It didn’t work very well, even in this very simple system.

Embarking upon the next incarnation of the windows automation for my office, openWindows 2.0, I decided to check out the latest tech, and stumbled upon the ESP8266 microprocessor by China-based Espressif Systems. Espressif packs a hearty CPU, lots of flash memory, and a WiFi chip onto its boards, all for very little money. I can get 5 of these microprocessor boards for $25 US !!!

Another possibility for this project is the ARM-based Particle Photon by Particle. This solution has WiFi as well, and a web-based programming interface with excellent code examples. Particle also has GSM (cellular) solutions for wider-area IoT applications…

These microprocessors are ideal for home automation, because they very handily connect to my home’s WiFi router. Instead of my prior, awkwardly-implemented XBee solution, I can now communicate very easily between the different electronics modules (the indoor and outdoor sensors, and each window’s motor drivers, etc.) over the WiFi that is already in my home.

Below is a rough draft of how I plan to do this. Both the project itself and this article are works in progress…

My current project, openWindows 2.0, involves opening and closing my home office windows using stepper motors. The main idea is to open the windows when the outside temperature is more favorable than the inside temperature, based on the thermostat setting in my office. There will, of course, be indoor sensors for temperature and humidity, and outdoor sensors for temperature, humidity, wind, rain, and sunshine/UV. There will be at least two sets of outdoor sensors, one on the west-facing side of my home and one on the east, to catch the daily change in sun position. In the morning, the back of the house is warmed by the sun, because it faces east. In the afternoon, the sun heats the front of the house, which faces west. Thus, both sides of the house, and the two sides of my office, have different outdoor temperatures based on the time of day, the amount of sunlight that makes it through the clouds on any given day, and of course, the season.

The system will also incorporate Weather Underground’s API to get current local weather and forecast information.
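As a sketch of what that looks like, here’s a minimal request against the Weather Underground conditions endpoint (its format as of this writing), using Python’s requests library. The API key and the location query are placeholders.

```python
# A rough sketch of pulling current conditions from Weather Underground.
# YOUR_KEY and the WA/Seattle query are placeholders, and the endpoint
# format reflects the API as of this writing.

import requests

API_KEY = "YOUR_KEY"
URL = f"http://api.wunderground.com/api/{API_KEY}/conditions/q/WA/Seattle.json"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
obs = resp.json()["current_observation"]

outdoor_f = float(obs["temp_f"])       # current outdoor temperature
humidity = obs["relative_humidity"]    # a string, e.g. "54%"
wind_mph = float(obs["wind_mph"])
print(outdoor_f, humidity, wind_mph)
```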

The whole system will eventually learn how much to open the windows, based on how effectively a given degree of opening produces a stable, optimal indoor temperature. In A.I. parlance, this learning can only take place with prior training. Over time, the system will train itself using a pre-specified trial-and-error technique, storing the results in a database and feeding them into the A.I. software until it is adequately trained.

After the training phase is complete, which I expect to take at least a full year to be considered adequate (six months would capture most weather variation in my location, but would fall short in many other geographies, e.g., monsoon seasons in India and the like), the A.I. system will be able to control and predict window operation based on the prior knowledge generated from the training data.

To put it more simply: after being trained on how much to open during a variety of indoor and outdoor weather scenarios, the windows will know how to operate, and will be able to predict how far to open or close based on the previous training data. The windows will know to close right before it gets too hot outside, to preserve the cool air inside (and conversely/analogously with heat).
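To make the idea concrete, here’s a minimal sketch of the kind of model the training database could feed, written with TensorFlow’s Keras API. The feature set, the CSV of logged trials, and the network shape are all assumptions, not the final design.

```python
# A hypothetical regression model: sensor readings in, window opening out.
# trials.csv is an assumed log with columns [indoor_f, outdoor_f, humidity,
# wind_mph, solar_wm2, hour] and a final column for the opening fraction
# (0.0-1.0) that best stabilized the indoor temperature in that trial.

import numpy as np
import tensorflow as tf

X = np.loadtxt("trials.csv", delimiter=",", usecols=range(6))
y = np.loadtxt("trials.csv", delimiter=",", usecols=6)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(6,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # opening fraction
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, batch_size=32)

# Predict how far to open for the current conditions:
opening = model.predict(np.array([[74.0, 62.0, 55.0, 3.0, 400.0, 14.0]]))
print(f"Open the windows {opening[0, 0] * 100:.0f}%")
```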

The predictive data will help if, for example, it is expected to be particularly hot during the next few hours, but it is currently cool outside. With this information, the system can open windows on the shady side of the house before the heat sets in, getting a head start at cooling the house beforehand, and conserving energy for that day, and in the long-run.

The outdoor wind sensor will assist in two ways when combined with current or predicted rainfall information (a rough sketch follows this list):

  1. If it is raining, the windows can open and close as they typically would… based on inside and outside weather conditions. If, however, it is raining and fairly windy, the windows should remain closed, or at least partially closed, based on the amount of wind, and the variance of the wind itself (i.e., wind gusts).

  2. The amount of wind will also affect how far the windows should open to heat or cool the room. If it’s windy, opening the windows to the same degree as on a calm day will change the indoor temperature more rapidly.
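Here’s one hypothetical way to fold both rules into the heuristic: a function that clamps the desired opening for wind and rain (the thresholds and scaling are assumptions).

```python
# A hypothetical wind/rain clamp on the desired window opening (0.0-1.0).

def limit_opening(desired, wind_mph, gusting, raining):
    """Clamp the desired opening for wind and rain conditions."""
    if raining and (wind_mph > 8 or gusting):
        return 0.0                  # rule 1: keep wind-blown rain out
    if wind_mph > 5:
        # Rule 2: wind exchanges air faster, so a smaller opening
        # achieves the same effect (the scale factor is an assumption).
        desired *= 5.0 / wind_mph
    return max(0.0, min(1.0, desired))
```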

The system will also monitor my home’s central HVAC, simply by monitoring the temperature of the air flowing through a duct in my office. If the heat is on, the system will open the windows more, provided I’m trying to keep the room cool. If the whole-house AC is on, the windows will remain closed to retain the cold, provided the air inside is still too warm relative to the thermostat.
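A sketch of that duct check, with threshold offsets that are purely assumptions for a typical forced-air system:

```python
# Hypothetical inference of the central HVAC state from a duct sensor.

def hvac_state(duct_f, room_f):
    """Classify the HVAC from the temperature of the air in the duct."""
    if duct_f > room_f + 15:   # duct air much warmer than the room
        return "heating"
    if duct_f < room_f - 10:   # duct air much cooler than the room
        return "cooling"
    return "idle"
```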

Another feature of the system will automate the operation of an additional mini-split AC unit (with heater) in my office, separate from the home’s central HVAC system and independently operated.

The mini-split AC unit will be controlled with an infrared emitter, similar to those found in all the remotes in front of your big flat-screen TV. If it is too hot inside and out, the office AC unit will kick on and keep my office cool, but only during hours when I’m in the office or expected to be, and when I’m at home (based on my cell phone being in proximity to my house and/or simple occupancy sensors in my office and home). Needless to say, when the office (or whole-house) AC unit is on, the windows will close to conserve the cool. The windows will operate analogously with heat, provided the room ever gets too cold (which never seems to happen in my case, but it could for another room, such as a basement).
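As a sketch of the plumbing, here’s how the occupancy gate and IR control might look on the Raspberry Pi, assuming the phone has a fixed address on my network and the mini-split’s codes have been recorded into a LIRC remote profile named “minisplit” (both assumptions).

```python
# Hypothetical occupancy check plus IR control of the mini-split via LIRC.

import subprocess

PHONE_IP = "192.168.1.50"   # assumed static lease for my cell phone

def phone_is_home():
    """Ping the phone once; success suggests someone is home."""
    result = subprocess.run(["ping", "-c", "1", "-W", "1", PHONE_IP],
                            capture_output=True)
    return result.returncode == 0

def set_ac(on):
    """Fire the mini-split's IR code through LIRC's irsend utility.
    Key names come from a hypothetical lircd.conf recorded from the
    unit's own remote."""
    key = "KEY_POWER_ON" if on else "KEY_POWER_OFF"
    subprocess.run(["irsend", "SEND_ONCE", "minisplit", key], check=True)

# Example gate, with the "too hot" decision made elsewhere:
# if phone_is_home():
#     set_ac(True)
```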

In addition to the stepper motors, the sensors, and the machine learning that must be accomplished to accurately predict when to open and close my windows, the system will implement a rechargeable, battery-operated, wireless touchscreen remote control based on Nextion’s enhanced 7-inch touchscreen display as its primary user interface.

This will allow me to do several things:

  1. Adjust the thermostat setting that tells the system the desired optimal temperature of the room.

  2. Manually override automated operation, opening and closing the windows to any degree with onscreen buttons, sliders, widgets, etc.

  3. Operate the AC/heating unit in the office manually, if desired.

  4. Turn the whole system of window motion and AC/heating control on and off.

  5. Display indoor and outdoor temperature, humidity, wind speed, and sunlight and UVA/UVB information from at least two sides of my home, as the temperature can differ by as much as 10-15° F based on the time of day and time of year. The display will be composed of a bevy of available widgets, including gauges, buttons, sliders, etc., as well as weather icons that show current conditions and forecast information.

In addition to providing control of the system and information about my immediate weather conditions, the display will also show weather data and forecast information provided by Weather Underground’s API.

The Nextion display and its programming interface allow for the easy creation of pages of user interface, so it will be relatively painless to create a multi-page menu system to control any aspect of the system. The Nextion programming environment includes the easy creation of a multitude of widgets, including buttons, sliders, and gauges, which can be configured to either monitor or control the system. Handily, the display is flashed (in “microprocessor-programming-speak”) and controlled by its own dedicated onboard microprocessor, offloading nearly all of that overhead from the microprocessor that actually controls the system. Information is passed between the display and one’s microprocessor of choice over just 2 wires, as a simple serial connection.
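To illustrate that serial link, here’s a minimal pyserial sketch of driving the display from a Python-speaking host. Every Nextion instruction is plain ASCII terminated by three 0xFF bytes; the port name and widget names (t0, j0) are assumptions from a hypothetical UI layout.

```python
# A minimal sketch of talking to a Nextion display over serial.

import serial

nex = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # port is an assumption

def nex_cmd(cmd):
    """Send one Nextion instruction with its 0xFF 0xFF 0xFF terminator."""
    nex.write(cmd.encode("ascii") + b"\xff\xff\xff")

nex_cmd('t0.txt="70.5"')  # update a hypothetical text widget (setpoint)
nex_cmd("j0.val=37")      # set a hypothetical progress bar (opening %)
```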

In addition to the Nextion display, the entire system can be operated from anywhere in the world, so long as there’s an internet connection back to my home in the U.S. I can use my cell phone, my computer, or my laptop to control and monitor any aspect of the entire system remotely.

I’m considering 3 distinct microprocessors for this project and its various components:

  1. From Espressif Systems, the ESP8266 and the dual-core ESP-32, each with integrated WiFi. The ESP8266 costs around $5 US and the ESP-32 about $7, both in individual quantities. These are powerful microprocessors with varying degrees of functionality, including the aforementioned WiFi, SPI/HSPI, I2C, standard serial connectivity, etc. Both cost significantly less when purchased in higher quantities or in bare-bones configurations (without supporting electronics). Of particular interest are the Wemos D1 mini and Wemos D1 mini pro. The latter has a larger, more powerful ceramic WiFi antenna and 16 MB of flash memory; the former has an on-board, PCB-based antenna and 4 MB of flash memory. The ESP-32 is similar to the ESP8266 but has two cores, which is handy for applications where a timing-critical task needs to run while the WiFi is active: one core can handle the WiFi, leaving the other free to do all the rest.

  2. The other candidate for at least part of this project is the Particle Photon from Particle. This microprocessor has WiFi and all the functionality of the Espressif chips mentioned above. It is significantly more expensive than its previously mentioned counterparts, however, costing $19 US at the time of this writing. On the other hand, it is inherently easy to use, as Particle’s web-programming interface provides context-sensitive help and well-documented code examples.

All three of these microprocessors can also be configured to be programmed Over the Air (OTA). This means I don’t need to physically connect them via USB to reprogram or flash them. Software updates can be readily performed from my local network, or from anywhere in the world with a slight modification to my home’s network configuration (by opening a port on my router).

Time permitting, I’d like to control other windows in my home. My home office is on the second floor, so locking its windows is a non-issue for me. Controlling the downstairs windows using the same stepper motors would require the addition of automated window locks. I’ve already ordered two of them from Banggood, at $6 each. This will allow me to operate the downstairs windows in the same way as those upstairs, but also to lock them, or to leave them only partially open yet locked, at night, when nobody is home, or when nobody is downstairs during the day, for example.

Communication between all the microprocessors in the entire system, including occupancy sensors inside and outside of my home (outside because I don’t want to lock our back sliding door while my kid is jumping on his trampoline), is handled by a $35 Raspberry Pi microcomputer running Raspbian OS. This ultra-cheap microcomputer runs an MQTT (Message Queuing Telemetry Transport) server called Mosquitto.

From mqtt.org: MQTT is a machine-to-machine (M2M)/”Internet of Things” connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport. It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium.

MQTT allows different computer architectures with dissimilar communications protocols to exchange information very easily by passing strings of data and/or numbers. These messages are passed between server and client, or publisher and subscriber (in MQTT parlance).

Mosquitto is simply the MQTT server implementation I’m using. It installs very easily on the RPi and is super-lightweight and simple.

MQTT also features authentication for security purposes.

Using MQTT, I can control my office thermostat or manually operate my windows using my touchscreen display, my mobile phone, or any connected computer anywhere in the world…
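To show the shape of this, here’s a minimal sketch using the paho-mqtt Python client: a window controller subscribes to a couple of topics on the Mosquitto broker, and any other device (phone app, Nextion bridge, etc.) publishes to them. The topic names, broker address, and credentials are all assumptions.

```python
# A minimal publish/subscribe sketch with paho-mqtt. Broker address,
# credentials, and topic names are assumptions.

import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"   # the Raspberry Pi running Mosquitto

def on_message(client, userdata, msg):
    # MQTT payloads are just bytes/strings, e.g. b"71.5" or b"37"
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.username_pw_set("openwindows", "secret")  # Mosquitto authentication
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("office/thermostat/set")
client.subscribe("office/window/1/opening")

# From any other client, a command is a single publish:
client.publish("office/thermostat/set", "71.5")

client.loop_forever()
```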

After all is said and (almost, …but never really) done, I plan to integrate Amazon’s Alexa/Echo voice recognition and information services (using the Alexa Voice Service, AVS).

Currently, Alexa controls some of the lights and dimmers in my home, and an ultraviolet bug zapper.

To control the bug zapper, I say:

Alexa, turn on bug zapper.

…and she says,

OK…

After integration with Alexa, I can say,

Alexa, open my windows 37 percent.

…or,

Alexa, change office thermostat to 71.5 degrees. 

…and she, and the windows in my home, will gladly obey.

…and when I say “Thank you” afterwards, Alexa will say,

No worries… !!!

Addendum, 2018-06-28:

The inspiration for using A.I. as part of my windows project, and for the TensorFlow A.I. package, came from an incredible outing I took on 2018-05-17, at an event sponsored by Seattle-based Makerologist. At this event, we built what are called Donkey Cars, which are (from donkeyCar.com):

An opensource DIY self driving platform for small scale cars.

These cars are made from inexpensive, off-the-shelf electronics, including an ~$25 Raspberry Pi microcomputer (an older, more affordable model than the one I’m using, yet entirely capable and powerful), an electronic speed controller of the sort found in typical remote-control cars, and a couple of circuit boards. The cars use the Raspberry Pi’s WiFi to communicate with the (desktop-sized) servers that run TensorFlow. They are largely made from 3D-printed parts, and the whole package is very reasonably priced. The Donkey Car platform is open source as well, and TensorFlow is free to use.

The A.I. is trained using the Pi’s native camera, which costs about $5. Using this camera, the cars are first trained (in A.I. parlance) to go around the track. The operator accelerates and steers the car with a joystick while driving around a track marked with white masking tape on either side and a yellow dotted masking-tape line down the center of the road. There are also miniature caution cones placed around the track, especially on the turns. While the car is being trained, the Pi’s camera sees the track and reports what it sees, in real time and via WiFi, to the A.I. software on the desktop server (TensorFlow, plus supporting software written in Python).

After the car is trained, and the numbers from the training session are crunched, the fun begins…

Now, the car knows how to navigate the track on its own! The car is placed on the same track, and the operator only accelerates the car as it goes around. The car steers itself! As it goes, its onboard Raspberry Pi communicates with the server, because there’s simply too much data and on-the-fly computing for the Pi to handle alone. With a reasonable amount of training data, these cars perform quite well, even when encountering other cars on the track.

When I went to this event, the cars performed horribly! There were several reasons for this:

  1. The WiFi was saturated by the ~50 people in the room, along with their cellphones. WiFi saturation alone is an issue with multiple cars running at once, but with all the extra bodies and their mobile devices, it was an order of magnitude more congested.

  2. We didn’t train our cars for very long. The Donkey Car people train much longer, and also know how to drive the track. The guy from our group drove as fast as he could around the track during the training session, and went off the track several times in the process. A real training session would involve a skilled driver at controlled speeds, such that the data better represented what the car should do.

  3. Both during the training session, and when the cars self-drove, there were some precocious kids interfering with the cars’ vision. They were running around the track, lying down over parts of it, etc. They also moved the caution cones around the border and turns between the training and self-driving stages. It’s a garbage-in/garbage-out situation for sure.

This didn’t matter, however. We had such a great time building the cars (assembling the electronics with wires and affixing them to the cars with a few screws), and then training and watching the cars try to drive around this now-makeshift track, that we didn’t care at all! We let the kids drive and crash our car over and over! It was a blast! (Thanks, Makerologist!)

The Donkey Car website has step-by-step instructions for assembly, from the 3D-printed parts, to the off-the-shelf electronics required, to installing and running the A.I. software they use. Everything you need is on their site. All you need is a decent desktop computer to use as a server, but even an old one might work if you wanted to wait a while for the numbers to crunch (not sure of computing requirements during training and self-driving phases though).

Donkey Car enthusiasts even have contests as to who can design the best A.I., who can best navigate a track with x cars on it, etc.

You can put some masking tape in your driveway or your street, and make your own, and learn a lot.

Here are a couple of videos I took from that event. Note: the cars crash for the aforementioned reasons. The performance is very unrepresentative of how they typically perform.

The first video is of our group’s car, and others on the track, along with the roaming kids and the then-moved caution cones.

The second is similar, except that (according to a Donkey Car guy there) no Donkey Car enthusiast had tried what I did next: I affixed my mobile phone and its camera to the top of our car so we could get the view of the track from its perspective.

No, I didn’t take the time to edit the videos, or add captions or anything. I just grabbed them and uploaded them. No production-value whatsoever. Too much to do!

They still seem to convey the fun we had, however…

(It was a total blast !!! )

Ergo, it was this experience that dramatically altered the design of my simple window project with motors and sensors, in complexity for sure, but mostly in fun! Applying A.I. to solve this problem is most definitely overkill. Statistics could very easily and adequately be used, and a simple heuristic applied, but where’s the fun in that?

Plus, when I get it all to work, I can say I know a bit about A.I., and I will!

When I’m done with this part of the project, I won’t even have to tell Alexa to do anything with my windows other than change the thermostat for the temperature in my office, or my whole home.

If I do it right, the system will know when I like it cooler or warmer, and when I’m expected to be home or occupying my office, so I can probably just keep quiet…

at least most of the time…