tech explorers, welcome!

Author: TRW

The impact of a forest wildfire in 3D with geographical information on Cesium JS

After a few posts about the use of satellite information, let's see how to bring it (almost) all together in a practical and impactful example. The latest wildfires in Spain caught my attention, as I could not imagine how brutal they have been (although nothing compared to the ones in Chile, Australia or the USA). But let's do the exercise without spending too many GB of geographical information.

What I want is to show the extent of the fire that occurred in Asturias in March, and also to show its impact by removing the trees affected by the flames. Let's do it!

Download data

I will need a Digital Surface Model (which includes trees and structures), an orthophoto taken during the fire, and a Digital Terrain Model (which has trees and structures removed) to replace the areas affected by the fire.

1. Terrain models

I'm going to use the great models from Spanish IGN, downloading the products MDS5 and MDT5 for the area.

http://centrodedescargas.cnig.es/CentroDescargas/index.jsp

2. Orthophotos

For satellite imagery, I decided to go for Landsat this time, as it had a clearer view during the last days of the fire.

I will be using the images taken on 17th February 2023 (before the fire) and 6th April 2023 (still within the last days of the fire).

https://search.earthdata.nasa.gov

Process satellite imagery

We'll use the i.group process from GRASS in QGIS to group the different bands captured by the satellite into a single RGB raster, as we saw in this previous post:

https://theroamingworkshop.cloud/b/en/1761/process-satellite-imagery-from-landsat-or-sentinel-2-with-qgis/

You'll have to do it for every region downloaded (four in my case), which will be joined later using Build virtual raster.

1. True color image (TCI)

Combine bands 4, 3, 2.

2. False color image

Combine bands 5, 4, 3.
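If you prefer scripting to the i.group dialog, the same band stack can also be built with GDAL's Python bindings. This is only a minimal sketch, assuming the GDAL Python bindings are installed and using placeholder file names for the bands of one Landsat scene:

from osgeo import gdal

# Stack the true colour bands (Landsat 8/9: B4, B3, B2) into one virtual raster.
# separate=True puts each input file into its own band instead of mosaicking them.
# For the false colour composite, swap in the B5, B4, B3 files instead.
bands = ["LC09_B4.TIF", "LC09_B3.TIF", "LC09_B2.TIF"]   # placeholder file names
vrt = gdal.BuildVRT("TCI.vrt", bands, separate=True)
vrt = None   # close the dataset so the VRT is written to disk

# Optionally convert the virtual raster to a regular GeoTIFF
gdal.Translate("TCI.tif", "TCI.vrt")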

3. Color adjustment

To get a better result, you can adjust the minimum and maximum values considered in each band composing the image. These values are found in the Histogram inside the layer properties.

Here are the values I used for the pictures above:

Band      TCI min   TCI max   FC min   FC max
1 Red     -100      1500      -50      4000
2 Green   0         1500      -100     2000
3 Blue    -10       1200      0        1200

Fire extent

As you can see, the false color image clearly shows the extent of the fire. We'll use it to generate a polygon covering the burnt area.

First, let's query the values in Band 1 (red) which offers a good contrast for values in the area of the fire. They are in the range 300-1300.

Using the process Reclassify by table, we'll assign value 1 to the cells inside this range, and value 0 to the rest.

Vectorize the result with the Polygonize process and, looking at the satellite imagery, select the polygons in the fire area.

Use the Dissolve tool to merge all the polygons into one element; then Smooth to round the corners slightly.

Now let's get the inverse: Extract by extent using the Landsat layer, then Difference with the fire polygon.
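The same chain can be scripted from the QGIS Python console. The sketch below is only an assumption of how the steps above map to processing algorithms, with placeholder file names; algorithm IDs and parameter names vary between QGIS versions, so check them with processing.algorithmHelp() before running. Note that in the post the fire polygons are selected by hand against the imagery, while here I simply keep the value-1 polygons:

import processing

fc = "landsat_false_color.vrt"   # placeholder: the false colour mosaic

# 1) Band 1 values inside 300-1300 become 1, everything else becomes 0
mask = processing.run("native:reclassifybytable", {
    "INPUT_RASTER": fc, "RASTER_BAND": 1,
    "TABLE": [-1e9, 300, 0,   300, 1300, 1,   1300, 1e9, 0],
    "RANGE_BOUNDARIES": 0,
    "OUTPUT": "TEMPORARY_OUTPUT"})["OUTPUT"]

# 2) Vectorize the mask and keep only the value-1 polygons
polys = processing.run("gdal:polygonize", {
    "INPUT": mask, "BAND": 1, "FIELD": "DN",
    "OUTPUT": "TEMPORARY_OUTPUT"})["OUTPUT"]
burnt = processing.run("native:extractbyattribute", {
    "INPUT": polys, "FIELD": "DN", "OPERATOR": 0, "VALUE": "1",
    "OUTPUT": "TEMPORARY_OUTPUT"})["OUTPUT"]

# 3) Merge everything into a single feature and round the corners slightly
fire = processing.run("native:dissolve", {
    "INPUT": burnt, "OUTPUT": "TEMPORARY_OUTPUT"})["OUTPUT"]
processing.run("native:smoothgeometry", {
    "INPUT": fire, "ITERATIONS": 3, "OFFSET": 0.25,
    "OUTPUT": "fire_extent.gpkg"})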

Process terrain

1. Combine terrain data

Join the files of the same model type into one layer (one for the DSM, and another one for the DTM), using the Build virtual raster process.

2. Extract terrain data

Extract the data of interest in each model:

  • From the DSM, remove the surface affected by the fire (keep only what lies outside the fire polygon).
  • Do the opposite with the DTM: keep only the bare terrain (without trees) inside the fire area, so it fills the gaps in the other model.

Use the Clip raster by mask layer process with the polygon layers generated previously.

Finally, join both raster layers so they fill each other's gaps, using Build virtual raster.
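In script form, the clipping and the final merge could look like the sketch below, again with GDAL's Python bindings and placeholder file names (the fire and inverse polygons are the ones generated earlier):

from osgeo import gdal

# Keep the DSM only outside the fire polygon (the burnt surface becomes nodata)
ds = gdal.Warp("dsm_outside_fire.tif", "dsm.vrt",
               cutlineDSName="outside_fire.gpkg", dstNodata=-9999)
ds = None

# Keep the bare-earth DTM only inside the fire polygon
ds = gdal.Warp("dtm_inside_fire.tif", "dtm.vrt",
               cutlineDSName="fire_extent.gpkg", dstNodata=-9999)
ds = None

# Merge both rasters; where one is nodata, the other one shows through
ds = gdal.BuildVRT("surface_after_fire.vrt",
                   ["dsm_outside_fire.tif", "dtm_inside_fire.tif"],
                   srcNodata=-9999, VRTNodata=-9999)
ds = None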

Bring it to life with Cesium JS

You should already have a surface model without trees in the fire area, but let's try to explore it in an interactive way.

I showed a similar example, using a custom Digital Terrain Model, as well as a recent satellite image, for the Tajogaite volcano in La Palma:

https://theroamingworkshop.cloud/b/1319/cesiumjs-el-visor-gratuito-de-mapas-en-3d-para-tu-web/

In this case I'll use Cesium JS again to interact easily with the map (follow the post linked above to see how to upload your custom files to the Cesium JS viewer).

For this purpose, I created a split-screen viewer (using two separate instances of Cesium JS) to show the before and after picture of the fire. Here you have a preview:

https://theroamingworkshop.cloud/demos/cesiumJSmirror/

I hope you liked it! Here you have the full code and a link to GitHub, where you can download it directly. And remember, share your doubts or comments on Twitter!

🐦 @RoamingWorkshop

<html lang="en">
<head>
<meta charset="utf-8">
<title>Cesium JS mirror v1.0</title>
<script src="https://cesium.com/downloads/cesiumjs/releases/1.96/Build/Cesium/Cesium.js"></script>
<link href="https://cesium.com/downloads/cesiumjs/releases/1.96/Build/Cesium/Widgets/widgets.css" rel="stylesheet">
</head>
<body style="margin:0;width:100vw;height:100vh;display:flex;flex-direction:row;font-family:Arial">
<div style="height:100%;width:50%;" id="cesiumContainer1">
	<span style="display:block;position:absolute;z-Index:1001;top:0;background-color: rgba(0, 0, 0, 0.5);color:darkorange;padding:13px">06/04/2023</span>
</div>
<div style="height:100%;width:50%;background-color:black;" id="cesiumContainer2">
	<span style="display:block;position:absolute;z-Index:1001;top:0;background-color: rgba(0, 0, 0, 0.5);color:darkorange;padding:13px">17/02/2023</span>
	<span style="display:block;position:absolute;z-Index:1001;bottom:10%;right:0;background-color: rgba(0, 0, 0, 0.5);color:white;padding:13px;font-size:14px;user-select:none;">
		<b><u>Cesium JS mirror v1.0</u></b><br>
		· Use the <b>left panel</b> to control the camera<br>
		· <b>Click+drag</b> to move the position<br>
		· <b>Control+drag</b> to rotate camera<br>
		· <b>Scroll</b> to zoom in/out<br>
		<span><a style="color:darkorange" href="https://theroamingworkshop.cloud" target="_blank">© The Roaming Workshop <span id="y"></span></a></span>
    </span>
</div>
<script>
	// INSERT ACCESS TOKEN
    // Your access token can be found at: https://cesium.com/ion/tokens.
    // Replace `your_access_token` with your Cesium ion access token.

    Cesium.Ion.defaultAccessToken = 'your_access_token';

	// Invoke LEFT view
    // Initialize the Cesium Viewer in the HTML element with the `cesiumContainer` ID.

    const viewerL = new Cesium.Viewer('cesiumContainer1', {
		terrainProvider: new Cesium.CesiumTerrainProvider({
			url: Cesium.IonResource.fromAssetId(1640615),//get your asset ID from "My Assets" menu
		}),	  
		baseLayerPicker: false,
		infoBox: false,
    });    

	// Add Landsat imagery
	const layerL = viewerL.imageryLayers.addImageryProvider(
	  new Cesium.IonImageryProvider({ assetId: 1640455 })//get your asset ID from "My Assets" menu
	);
	
	// Hide bottom widgets
	viewerL.timeline.container.style.visibility = "hidden";
	viewerL.animation.container.style.visibility = "hidden";

    // Fly the camera at the given longitude, latitude, and height.
    viewerL.camera.flyTo({
      destination : Cesium.Cartesian3.fromDegrees(-6.7200, 43.175, 6000),
      orientation : {
        heading : Cesium.Math.toRadians(15.0),
        pitch : Cesium.Math.toRadians(-20.0),
      }
    });
    
    // Invoke RIGHT view

    const viewerR = new Cesium.Viewer('cesiumContainer2', {
		terrainProvider: new Cesium.CesiumTerrainProvider({
			url: Cesium.IonResource.fromAssetId(1640502),//get your asset ID from "My Assets" menu
		}),	  
		baseLayerPicker: false,
		infoBox: false,
    });    

	// Add Landsat imagery
	const layerR = viewerR.imageryLayers.addImageryProvider(
	  new Cesium.IonImageryProvider({ assetId: 1640977 })
	);
	
	// Hide bottom widgets
	viewerR.timeline.container.style.visibility = "hidden";
	viewerR.animation.container.style.visibility = "hidden";

    // Fly the camera at the given longitude, latitude, and height.
    viewerR.camera.flyTo({
      destination : Cesium.Cartesian3.fromDegrees(-6.7200, 43.175, 6000),
      orientation : {
        heading : Cesium.Math.toRadians(15.0),
        pitch : Cesium.Math.toRadians(-20.0),
      }
    });
    
    // Invoke camera tracker
    //define a loop
    var camInterval=setInterval(function(){

	},200);
    clearInterval(camInterval);
    
    //start copying the camera while the user interacts with the left view
    document.onmousedown=trackCam;
    document.ondragstart=trackCam;
    
    //define loop function (read properties from left camera and copy to right camera)
    function trackCam(){
		camInterval=setInterval(function(){
			viewerR.camera.setView({
				destination: Cesium.Cartesian3.fromElements(
					  viewerL.camera.position.x,
					  viewerL.camera.position.y,
					  viewerL.camera.position.z
					),
				orientation: {
					direction : new Cesium.Cartesian3(
						viewerL.camera.direction.x,
						viewerL.camera.direction.y,
						viewerL.camera.direction.z),
					up : new Cesium.Cartesian3(
						viewerL.camera.up.x,
						viewerL.camera.up.y,
						viewerL.camera.up.z)
				},
			});
		},50);
	};
	//stop loop listeners (release mouse or stop scroll)
	document.onmouseup=function(){
		clearInterval(camInterval);
	};
	document.ondragend=function(){
		clearInterval(camInterval);
	};
	
	//keep the copyright date updated
	var y=new Date(Date.now());
	document.getElementById("y").innerHTML=y.getFullYear();
  </script>
</body>
</html>

Process LIDAR data in QGIS and create your own Digital Terrain Model

Sometimes a Digital Terrain Model (DTM) might not be detailed enough, or might be poorly cleaned. If you have access to LIDAR data, you can generate a terrain model yourself and make the most of the raw information, giving more detail to your areas of interest. Let's see how.

1. Data download

Point cloud

I'll use the awesome public data from the Spanish Geographical Institute (IGN) obtained with survey flights using laser measurements (LIDAR).

  1. Access LIDAR data from IGN Downloads Center.

http://centrodedescargas.cnig.es/CentroDescargas/index.jsp

  2. Draw a polygon of the area of interest.
  3. Download the files PNOA-XXXX...XXXX-RGB.LAZ. RGB uses true color; ICR, infra-red. Both are valid.

TIP! Download all files using the IGN applet. It's a .jnlp file that requires Java installed on Windows, or IcedTea on Linux (sudo apt-get install icedtea-netx)

2. Process LIDAR point cloud in QGIS

Direct visualization

Since recent versions (such as 3.28 LTR Firenze), QGIS includes compatibility with point cloud files.

Just drag and drop the file to the canvas or, in the menu, Layer -> Add layer... -> Add point cloud layer...

You'll see the true color data you downloaded, which you can classify in the Symbology properties, choosing Classification by data type:

3D view

Another default function coming with QGIS is 3D visualization of the information.

Let's configure the 3D properties of the LIDAR layer to triangulate the surface and get a better result.

Now, create a new view in the menu View -> 3D Map Views -> New 3D map view. Using SHIFT+Drag you can rotate your perspective.

LAStools plugin

To handle LIDAR information easily we'll use the tools from a plugin called LAStools, which you can install in the following way:

TIP! On Linux it's recommended to install Wine to use the .exe files directly, or otherwise you'll need to compile the binaries.

  1. Access LAStools' website and scroll to the bottom:

https://lastools.github.io/

  2. The full tool comes at a price, but you can access the public download to use the basic functions that we need.
  3. Unzip the compressed .zip file into a simple folder path (without spaces or special characters).
  4. Now open QGIS, search the plugins list for LAStools and install it.
  5. Finally, configure LAStools' installation folder (if it's different from the default C:/ ). The settings shown below work on Linux with Wine installed (using PlayOnLinux in my case).

Extract types of LIDAR data

Using LAStools we can extract the different types of data that make up the point cloud. For example, we'll extract only the data classified as ground (Suelo), which is assigned the value 2.

With the process las2las_filter we'll create a filtered point cloud:

  • Select the .laz file to filter.
  • In the filter option, choose keep_class 2.
  • Leave the rest as default, and enter 0 where a field requires a value.
  • Finally, save the file with the .laz extension in a known location so you can find it easily.

Once finished, just load the generated file to see the point cloud showing only ground data (with buildings and vegetation removed).
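If you prefer to stay in Python and skip LAStools for this step, the same class filter can be done with the laspy library. This is only a sketch with a different tool from the one used above, assuming laspy is installed together with a LAZ backend (lazrs or laszip) and using a placeholder file name:

import laspy

las = laspy.read("PNOA_tile_RGB.laz")            # placeholder file name

# Class 2 is "ground" in the standard LAS classification used by these clouds
ground = las.points[las.classification == 2]

out = laspy.LasData(las.header)                  # reuse header, scales and offsets
out.points = ground
out.write("PNOA_tile_ground.laz")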

LIDAR to vector conversion

Now use the process las2shp to transform the point cloud into a vector format so you can operate easily with other GIS tools:

  • Choose the point cloud file just filtered.
  • Specify 1 point per record to extract every point of the cloud.
  • Save the file with .shp extension in a known location to find it easily.

And this will be your filtered point cloud in the classic vector format.

You can see that there is no elevation field in the attribute table. I'll create a new field, ELEV, to store the Z (height) coordinate and use it to generate a Digital Terrain Model.
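That field can also be added from the QGIS Python console with the field calculator. A sketch, assuming the shapefile stores Z geometries and using placeholder file names (check the exact parameter names with processing.algorithmHelp("native:fieldcalculator")):

import processing

processing.run("native:fieldcalculator", {
    "INPUT": "ground_points.shp",      # placeholder: the shapefile from las2shp
    "FIELD_NAME": "ELEV",
    "FIELD_TYPE": 0,                   # 0 = decimal number
    "FIELD_LENGTH": 10,
    "FIELD_PRECISION": 2,
    "FORMULA": "z($geometry)",         # needs point geometries with a Z value
    "OUTPUT": "ground_points_elev.gpkg"})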

3. Digital Terrain Model creation

Raster from vector point layer

Thanks to the integration of GRASS GIS, we can make use of powerful vector and raster processing tools. Let's use v.surf.idw to generate a regular grid by interpolating the data in a point layer (in this case the values are weighted by the inverse of the distance, but there are other algorithms, such as splines). A scripted version is sketched after the list below.

  • Choose the vector point layer.
  • Choose the number of points used for the interpolation (the data here is quite dense, so I'll choose 50). The more points you use, the smoother the result will be, at the expense of losing the detail that the data density provides.
  • Leave the power at the value of 2, to use "inverse squared distance".
  • Choose the data field used in the interpolation (ELEV).
  • Define the grid cell size. I chose 2 to compare the result with the 2 m DTM product from IGN.
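A sketch of the same call from the Python console; the GRASS provider prefix and parameter names differ between QGIS versions, so confirm them with processing.algorithmHelp("grass7:v.surf.idw") before running:

import processing

processing.run("grass7:v.surf.idw", {
    "input": "ground_points_elev.gpkg",    # vector point layer (placeholder name)
    "column": "ELEV",                      # attribute to interpolate
    "npoints": 50,                         # points used for each interpolation
    "power": 2,                            # inverse squared distance weighting
    "GRASS_REGION_CELLSIZE_PARAMETER": 2,  # 2 m cells, to compare with the IGN product
    "output": "dtm_idw.tif"})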

4. Result

Let's zoom out and see how it all finished:

And now let's see a bit more detail.

Apply the same color ramp to the generated DTM and to the IGN product. Overall, the result is very similar, with some differences in tree areas, where the processed layer looks more reasonable.

And that's it! Any doubt or comment can be dropped on Twitter!

🐦 @RoamingWorkshop

🐢TorToiSe-TTS: AI text to speech generation

AI trends are here to stay, and today there's much talk about Vall-E and its future combination with GPT-3. But we must remember that all these Artificial Intelligence products come from collaborative research that has been freely available, so we will always find an open-source equivalent, and we must be thankful for it.

That’s the case of TorToiSe-TTS (text to speech), an AI voice generator from written text totally free to use in your PC.

https://github.com/neonbjb/tortoise-tts

Just listen to a little sample below from Morgan Freeman:

Installation

The easiest way is to use the cloud script on Google Colab uploaded by the developer:

https://colab.research.google.com/drive/1wVVqUPqwiDBUVeWWOUNglpGhU3hg_cbR?usp=sharing

You just need to sign in and click "play" ▶ to run each block of code.

But surely you want to run TorToiSe-TTS locally, without internet, and save the audio to your local drive, so let's move on.

Installing python3

Like many AI applications, TorToiSe-TTS runs on Python, so you need python3 on your PC. I always recommend using Linux, but you should be able to run it in a Windows terminal as well.

https://www.python.org/downloads/

On Linux, just install it from your distro repo:

sudo apt-get update
sudo apt-get install python3.11

You'll also need the venv module to create a virtual Python environment (it usually comes with the Windows installer):

sudo apt-get install python3.11-venv

Download repository

Download the official repository:

https://github.com/neonbjb/tortoise-tts

Either as the compressed file:

Or using git:

git clone https://github.com/neonbjb/tortoise-tts

You can also download my fork of the repository, where I add further installation instructions, an automatic terminal launcher and some test voices for Ultron (yes, Tony Stark's evil droid), which we'll see later on.

https://github.com/TheRoam/tortoise-tts-linuxLauncher

Create a python virtual environment

Next we'll have to install a series of Python modules needed to run TorToiSe-TTS, but before that, we'll create a virtual Python environment, so the installation of these modules won't affect the rest of your Python installation. You'll find this helpful when you use different AI apps that need different versions of the same module.

Open a terminal and write the following, so you'll create a "TTS" environment:

cd tortoise-tts
python3 -m venv TTS

Activate it this way:

source TTS/bin/activate

And now you'll see a reference to the TTS environment in the terminal:

(TTS) abc@123:~$ |

Install python modules

Let's now install the required modules, following the Colab instructions:

https://colab.research.google.com/drive/1wVVqUPqwiDBUVeWWOUNglpGhU3hg_cbR?usp=sharing#scrollTo=Gen09NM4hONQ

pip3 install -U scipy
pip3 install transformers==4.19.0
pip3 install -r requirements.txt
python3 setup.py install

Now you can try running TorToiSe-TTS, but some libraries will fail, depending on your python installation:

python3 scripts/tortoise_tts.py "This is The Roaming Workshop" -v "daniel" -p "fast" -o "test1.wav"

Repeat the previous command, installing the missing modules, until you get no errors. In my case, these were the ones I needed:

pip3 install torch
pip3 install torchaudio
pip3 install llvmlite
pip3 install numpy==1.23

Finally, this test1.wav sounds like this in the voice of daniel (which turns out to be Daniel Craig):

Using TorToiSe-TTS

The simplest script in TorToiSe-TTS is found at scripts/tortoise_tts.py and these are its main arguments:

python3 scripts/tortoise_tts.py "text" -v "voice" -V "route/to/voices/folder" --seed number -p "fast" -o "output-file.wav" 
  • "text": text chain that will be converted to audio
  • -v: voice to be used to convert text. It must be the name of one of the folders available in /tortoise/voices/
  • -V: specifies a folder for voices, in the case that you use a custom one.
  • --seed: seed number to feature the algorithm (can be any number)
  • -p: preset mode that determines quality ("ultra_fast", "fast", "standard", "high_quality").
  • -o: route and name of the output file. You must specify the fileformat, which is .wav

If you use the TTS.sh script from my repo, you'll be asked for these arguments on screen and it will run the algorithm automatically.
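The same can be scripted in Python with the API that the repository exposes. This sketch follows the project's README (module paths and defaults may change between versions):

import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech()                     # downloads the model checkpoints on first run
voice_samples, conditioning_latents = load_voice("daniel")

# preset can be "ultra_fast", "fast", "standard" or "high_quality"
gen = tts.tts_with_preset("This is The Roaming Workshop",
                          voice_samples=voice_samples,
                          conditioning_latents=conditioning_latents,
                          preset="fast")

torchaudio.save("test1.wav", gen.squeeze(0).cpu(), 24000)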

Add your own voices

You can add more voices to TorToiSe-TTS. For example, I wanted to add the voice of Ultron, the Marvel supervillain, following the developer's (neonbjb's) indications:

  • You must record 3 "clean" samples (without background noise or music) of about 10 seconds duration.
  • The format must be floating point WAV with a 22,050 Hz sample rate (you can use Audacity; see the resampling sketch after this list)
  • Create a new folder inside /tortoise/voices (or anywhere, really) and save your recordings there.
  • When running TorToiSe-TTS, you'll need to call the voices folder with argument -V and the new voice with argument -v
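If you'd rather not resample by hand in Audacity, here is a quick torchaudio sketch (file and folder names are placeholders; it assumes torchaudio is already installed from the steps above):

import torchaudio
import torchaudio.functional as F

wav, sr = torchaudio.load("ultron_clip1_raw.wav")      # any clean recording
wav = wav.mean(dim=0, keepdim=True)                    # mix down to mono
wav = F.resample(wav, orig_freq=sr, new_freq=22050)    # rate expected by TorToiSe
torchaudio.save("tortoise/voices/ultron-en/1.wav", wav, 22050)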

For example, to use my Ultron recordings:

python3 scripts/tortoise_tts.py "This is The Roaming Workshop" -V "./tortoise/voices" -v "ultron-en" -p "fast" --seed 17 -o "./wavs/TRW_ultron3.wav"

Which sounds like this:

I've taken cuts from a scene in Avengers: Age of Ultron, both in English (ultron-en) and Spanish (ultron-es), which you can download from my repository.

"Ultron supervillain" by Optimised Stable Diffusion + Upscayl 2. The Roaming Workshop 2023.

Right now, the TorToiSe-TTS model is trained only in English, so it only works properly in this language.

You can input text in another language and the AI will try to read it, but it will use the pronunciation learnt in English and it will sound weird.

If you are willing to train a model in your language (you need plenty of GPU power and several months), contact the developer, as he's keen to expand the project.

In the meantime, you can send any doubts or comments on 🐦 Twitter or 🐤 Koo!

🐦 @RoamingWorkshop

🐤 @TheRoamingWorkshop

Dalle-playground: AI image generation local server for low resources PCs

AI image generation has a high computational cost. Don't be fooled by the speed of the Dall·E 2 API that we saw in this post; if these services are usually paid, it's for a reason, and, apart from the online services that we also saw, running AI on an average computer is not so simple.

After trying, in vain, some open-source alternatives like Pixray or Dalle-Flow, I finally bring you the simplest of them: dalle-playground. This is an early version of Dall·E, so you won't obtain the best of results.

Despite this, I will soon bring an alternative to Dall·E (Stable Diffusion from stability.ai), which also has a version optimized by the community for low-resource PCs.

Requirements

"Computer hardware elements" by dalle-playground

Hardware

Just to give you an idea, Pixray recommends a minimum of 16GB of VRAM and Dalle-Flow 21GB. VRAM (video RAM) is the fast memory on your graphics card (don't confuse it with the usual system RAM).

A standard laptop like mine has a Nvidia GeForce GTX1050Ti with 3GB dedicated VRAM, plus 8GB RAM on board.

With this minimum requirement, and some patience, you can run Dalle-playground locally in your PC, although it also requires an internet connection to check for updated python modules and AI checkpoints.

If you have one or several more powerful graphics cards, I would recommend trying Pixray, as it installs relatively easily and is well documented and widely used.

https://github.com/pixray/pixray#usage

Software

Software requirements aren't trivial either. You'll need Python and Node.js. I will show the main steps for Linux, which is more flexible when installing packages of this kind, but this is equally valid for Windows or Mac if you can find your way around a terminal or use docker.

Download dalle-playground

I found this repository by chance, just before it was updated for Stable Diffusion V2 (back in November 2022) and I was smart enough to clone it.

Access and download all the repository from my github:

https://github.com/TheRoam/dalle-playground-DalleMINI-localLinux

Or optionally download the original repository with Stable Diffusion V2, but this requires much more VRAM:

https://github.com/saharmor/dalle-playground/

If you use git you can clone it directly from the terminal:

git clone https://github.com/TheRoam/dalle-playground-DalleMINI-localLinux

I renamed the folder locally to dalle-playground.

Install python3 and required modules

The whole algorithm runs on Python in a backend. The main repository only mentions the use of python3, so I assume that previous versions won't work. Check your Python version with:

>> python3 -V
Python 3.10.6

Or install it from its official source (it's currently on version 3.11, so check which is the latest available for your system):

https://www.python.org/downloads/

Or from your Linux repo:

sudo apt-get install python3.10

You'll also need the venv module to virtualize dalle-playground's working environment so it won't alter your whole Python installation (the following is for Linux, as it's already included in the Windows installer):

sudo apt-get install python3.10-venv

In the backend folder, create a Python virtual environment, which I named dalleP:

cd dalle-playground/backend
python3 -m venv dalleP

Now, activate this virtual environment (you'll see that the name appears at the start of the terminal line):

(dalleP) abc@123: ~dalle-playground/backend$

Install the remaining python modules required by dalle-playground which are indicated in the file dalle-playground/backend/requirements.txt

pip3 install -r requirements.txt

Apart from this, you'll need pyTorch, if not installed yet:

pip3 install torch

Install npm

Node.js will run a local web server which will act as an app. Install it from the official source:

https://nodejs.org/en/download/

Or from your Linux repo:

sudo apt-get install npm

Now move to the frontend folder dalle-playground/interface and install the modules needed by Node:

cd dalle-playground/interface
npm install

Launch the backend

With everything installed, let's launch the servers, starting with the backend.

First activate the python virtual environment in the folder dalle-playground/backend (if you just installed it, it should be activated already)

cd dalle-playground/backend
source dalleP/bin/activate

Launch the backend app:

python3 app.py --port 8080 --model_version mini

The backend will take a couple of minutes to start (from 2 to 5 minutes). Wait for a message like the following and note the IP addresses that appear at the end:

--> DALL-E Server is up and running!
--> Model selected - DALL-E ModelSize.MINI
 * Serving Flask app 'app' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8080
 * Running on http://192.168.1.XX:8080

Launch frontend

We'll now launch the Node.js local web server, opening a new terminal:

cd dalle-playground/interface
npm start

When the process finishes, it will launch a web browser and show the graphical interface of the app.

Automatic launcher

On Linux you can use my launch.sh script, which starts the backend and frontend automatically following the steps above. Just sit back and wait for it to load.

launch.sh

#!/bin/bash
#Launcher for dalle-playground in Linux terminal

#Launches backend and frontend scripts in one go
$(bash ./frontend.sh && bash ./backend.sh &)

#Both scripts will run in one terminal.
#Close this terminal to stop the program.

backend.sh

#!/bin/bash
#Backend launcher for dalle-playground in Linux terminal

#move to backend folder
echo "------ MOVING TO BACKEND FOLDER ------"
cd ./backend

#set python virtual environment
echo "------ SETTING UP PYTHON VIRTUAL ENVIRONMENT ------"
python3 -m venv dalleP
source dalleP/bin/activate

#launch backend
echo "------ LAUNCHING DALLE-PLAYGROUND BACKEND ------"
python3 app.py --port 8080 --model_version mini &

frontend.sh

#!/bin/bash
#Frontend launcher for dalle-playground in Linux terminal

#move to frontend folder
echo "------ MOVING TO FRONTEND FOLDER ------"
cd ./interface

#launch frontend
echo "------ LAUNCHING DALLE-PLAYGROUND FRONTEND ------"
npm start &

App dalle-playground

In the first field, type the IP address for the backend server that we saw earlier. If you're accessing from the same PC, you can use the first one:

http://127.0.0.1:8080

But you can access from any other device in your local network using the second one:

http://192.168.1.XX:8080

Now enter the description of the image to be generated in the second field, and choose the number of images to show (more images will take longer).

Press [enter] and wait for the image to generate (about 5 minutes per image).

And there you have your first locally AI-generated image. I will include a small gallery of results below. In the next post I will show how to obtain better results using Stable Diffusion, also with less than 4GB of VRAM.

You know I await your doubts and comments on 🐦 Twitter!

🐦 @RoamingWorkshop

Note: original images at 256x256 pixels, upscaled using Upscayl.

Sonoff D1 Dimmer: configuring local HTTP API (DIY mode) and assembling for external connection.

I had a Shelly Dimmer inside a plug box in the wall, but one fine day it stopped working (probably because of high temperatures, as it only withstands up to 35ºC). Looking for an alternative, I found that Sonoff had released their equivalent at 1/3 of the price of the Shelly.

But in the end, cheap turns out expensive, as it is much more complicated to configure than the Shelly Dimmer and it is bigger.

After many tests, and given the poor documentation, here I explain how to configure the Sonoff D1 Dimmer to use the local API without depending on the e-weLink app.

Additionally, given its size, you won't find much space for it in your connection boxes, so I'll also show you how to craft an external connection adapter.

Requirements

  • Sonoff D1 Dimmer.
  • Two wire electrical cable (line and neutral).
  • Female plug socket.
  • Male plug.
  • Sonoff RM-433 remote controller (very recommended).
  • Assembling material for a case (3D printer, plywood, etc).

Electrical connection

The first thing you need is to connect the D1 to the 220V domestic mains, following the schematic given in the user manual:

https://sonoff.tech/wp-content/uploads/2021/03/%E8%AF%B4%E6%98%8E%E4%B9%A6-D1-V-1.1-20210305.pdf

The previous schematic more or less complies with European norm:

  • Line (live): black, brown or grey (red in this case...)
  • Neutral: blue.

The Shelly Dimmer is much more compact and fits easily in a connection box. Not so in this case, so I will connect it externally using an extension lead, and I will later detail a simple case for its assembly.

TIP! If you're not experienced with electricity, you should read up quite a bit and proceed with caution. A shock from the domestic mains is no joke. If you make the connection externally this way, you won't be in much danger.

For the moment we can now make it work.

Internet connection

This is the complicated bit: with so much casing, there was apparently no room for the usual push button to power the device on/off and reset it.

If you're lucky, your Sonoff won't be preconfigured and you might be able to connect to it on the first attempt. If it is preconfigured, probably to check its operation in another network, the device is no longer accessible, even with the e-weLink app, unless you are in the network where it was configured.

To detect it, you must restore to default settings and for this you have two options:

  • Restore using e-weLink app from the network where it was configured (very unlikely you have access to it).
  • Restore using Sonoff RM-433 remote controller (you'll end up buying this extra accessory).

Pairing Sonoff RM-433 remote controller

In the end, the price of the cheap D1 has doubled with the need to buy the RM-433 remote controller, but it is still not crazy. Here is its manual:

https://sonoff.tech/wp-content/uploads/2021/03/%E8%AF%B4%E6%98%8E%E4%B9%A6-RM433-V1.1-20210305.pdf

The first thing to do is to pair the controller with the D1:

  1. Connect the D1 to a socket.
  2. Hold button 7 for about 5 seconds, until you hear a beep (this removes the previous radio-frequency assignment).
  3. Unplug and plug the D1 back in to restart it.
  4. Press any button on the controller so it's assigned to the D1.
  5. You'll hear another beep and the controller is now paired and can be used to control the D1.

Restore WIFI network

Now you need to restore the network assigned to the D1.

Hold button 8 for about 5 seconds or, basically, until the LED starts blinking like this:

Breathing mode. Two fast blinks, one slow blink.

You removed the previous network. Now set it to pairing mode.

Again, hold button 8 for about 5 seconds, or until the LED starts blinking continuously:

Pairing mode. Constant blinking.

This way, the device starts a WIFI Access Point (WIFI AP) with a name in the form ITEAD-XXXXXXXXXX.

Pairing with e-weLink

From here, if you want the easy route, just download the e-weLink app and press the quick pairing button. You'll then have your D1 accessible from this app.

Pairing in DIY mode

But I want the complicated way: enabling DIY mode to access the device in my network and control it using commands from the HTTP API in a web app.

We need to find the WIFI network named ITEAD-XXXXXXXXXX set up by the device and connect to it using the password 12345678.

Now open a web browser and access this address http://10.10.7.1 where you'll find the following screens.

Introduce the name (SSID) and password of your WIFI network, and the device is now linked to it.

Assembly

Before getting into the details of the HTTP API, I'll show you a 3D printed case design so the cables and connections aren't left completely exposed.

It consists of two PLA pieces (base and top) which can be screwed together and which you can download from this server:

https://theroamingworkshop.cloud/demos/D1case_v1_base.stl

https://theroamingworkshop.cloud/demos/D1case_v1_top.stl

You can also preview here:

D1 HTTP API usage

To use the commands of the HTTP API you must know the device's IP in your local network.

Find IP in the router

You can access the network map in your router, usually from the address http://192.168.1.1

The address and the password should be on a sticker on your router. There you'll see your device with a name like ESP-XXXX, which comes from the WIFI module it carries (I had already renamed it here):

Find IP using nmap

In a terminal you can use the tool nmap to scan your local network.

  • Download it if not done yet:
    sudo apt-get update
    sudo apt-get install nmap
  • Scan your network (using sudo you'll get the MAC address, which is useful as the IP could change when restarting the router)
    sudo nmap -v -sn 192.168.1.0/24

Send HTTP requests to the D1

Sonoff's D1 HTTP API is documented on their website:

https://sonoff.tech/sonoff-diy-developer-documentation-d1-http-api/

In order to communicate with the device, you need to send HTTP requests using some software like Postman or using curl or wget in a terminal.

The request is sent to the device IP on the default port 8081, and we also have to include the device id in the request body (this id matches the XXXXXXXXXX code in the WIFI network name ITEAD-XXXXXXXXXX).

Let's see some use cases with curl and Postman.

Device information

http://[ip]:[port]/zeroconf/info

  • curl

curl -X POST 'http://192.168.1.34:8081/zeroconf/info' --data-raw '{"deviceid": "XXXXXXXXXX","data": {}}'

  • Postman
  • Response
{
    "seq": 6,
    "error": 0,
    "data": {
        "deviceid": "XXXXXXXXXX",
        "switch": "off",
        "startup": "off",
        "brightness": 60,
        "brightMin": 0,
        "brightMax": 100,
        "mode": 0,
        "otaUnlock": false,
        "fwVersion": "3.5.0",
        "ssid": "TU_RED_WIFI",
        "bssid": "XX:XX:XX:XX:XX:XX",
        "signalStrength": -58
    }
}

Turn on/off

http://[ip]:[port]/zeroconf/switch

  • curl

curl -X POST 'http://192.168.1.34:8081/zeroconf/switch' --data-raw '{"deviceid": "XXXXXXXXXX","data": {"switch":"on"}}'

  • Postman
  • Response
{
    "seq": 9,
    "error": 0
}

Brightness adjustment

http://[ip]:[port]/zeroconf/dimmable

  • curl

curl -X POST 'http://192.168.1.34:8081/zeroconf/dimmable' --data-raw '{"deviceid": "XXXXXXXXXX","data": {"switch":"on","brightness":50,"mode":0,"brightmin":0,"brightmax":100}}'

  • Postman
  • Response
{
    "seq": 14,
    "error": 0
}
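The three endpoints above are easy to wrap in a small Python helper. Just a sketch, using the requests library and the same JSON bodies shown in the examples:

import requests

D1_IP = "192.168.1.34"     # your device IP
D1_ID = "XXXXXXXXXX"       # the id from the ITEAD-XXXXXXXXXX network name

def d1(endpoint, data=None):
    """POST a DIY-mode request to the D1 and return its JSON response."""
    body = {"deviceid": D1_ID, "data": data or {}}
    r = requests.post(f"http://{D1_IP}:8081/zeroconf/{endpoint}", json=body, timeout=5)
    return r.json()

print(d1("info"))                                        # device information
print(d1("switch", {"switch": "on"}))                    # turn on
print(d1("dimmable", {"switch": "on", "brightness": 50,  # dim to 50%
                      "mode": 0, "brightmin": 0, "brightmax": 100}))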

Now you're ready to program your own app and control your D1 to your liking in a completely private way. I hope this was useful, but if you have any doubts or comments, don't hesitate to drop them on Twitter 🐦!

🐦 @RoamingWorkshop

Temps-i 7: DIY desktop clock with WIFI, temperature-pressure sensor and +48h autonomy.

After lots of comings and goings, trials, redesigns, burns, cuts and some minor explosions, I can finally present a smart desktop clock I have been working on for the last 2 years. A detailed tutorial would be even longer and more tedious, so I hope these notes can help you make your own. I warn you, this takes a lot of time and practice, and can't be done carelessly…

What's Temps-i 7?

Let's break down its name:

  • Temps means time in Valencian,
  • i for internet, where it gets the time,
  • 7 for the display, which uses 7 segments for each digit.

These three concepts define this compact desktop clock, with WIFI connectivity, temperature sensor and good autonomy.

Let's see how it's made!

Components

First, let's see the recipe ingredients and what each one does:

  • Sparkfun ESP32-Thing board
    >>Top performance microcontroller with WIFI and Bluetooth connectivity thanks to the ESP32 integrated chip, ideal for IoT projects.
  • 4 digits 7 segments red color display.
    >>Shows time and temperature.
  • BMP-280 module.
    >>Temperature and barometric pressure compact digital sensor.
  • 100 Ohm resistors
    >>Needed to reduce the display current without lowering brightness excessively (it admits up to 1 kOhm, but the LEDs would be hardly visible in daylight).
  • PCB with custom circuit.
    >>Simplifies the connection of the display and resistors to the microcontroller.
  • 1000mAh 3.7v LiPo battery
    >>Ensures an autonomy up to 48 hours without external voltage.
  • Jumper cables.
    >>For additional connections of external modules.
  • Push button
    >>Used to switch the program on display.

Electronic design

It's very important to study the electronic components being used, read all the specifications and prototype with all precautions before we start soldering like crazy.

Component selection

The components listed earlier are not a mere coincidence, nor copied from elsewhere. They're the most successful of many trials and meet the project's needs:

  • The 4-digit display shows exactly what I'm after: the time. If I can also use it to show temperature, that's fine. But a better quality display, like an LCD, would be unnecessarily demanding, and autonomy is another key requirement.
  • The microcontroller includes internet connectivity, as well as enough computing capacity. It also has enough in/out pins to control the display without a GPIO expander. Some other options I tried:
    • More compact microcontrollers: Teensy 4, Digispark, SparkFun Pro Micro. They need a GPIO expander (like the PCF8574) and/or a WIFI module (like the ESP-01), which means too many extra connections.
    • Microcontrollers integrating WIFI and sufficient I/O, like the NodeMCU ESP8266. It's getting outdated and lacks processing capacity: the counter drifted almost 4 seconds every minute.

Prototyping electronic circuit

Having researched and obtained the components, connect them on a prototyping board (breadboard) to test their operation.

In my case, after different pin combinations, the most organised way is the following:

ESP32-Thing   Component
VBAT          LiPo +
3V3           BMP-280 3V3
GND           LiPo -, BMP GND, BMP SD0, Push -
GPIO21        BMP SDA
GPIO04        BMP SCL
GPIO32        Push +
GPIO17        Display Digit 1
GPIO23        Display Digit 2
GPIO19        Display Digit 3
GPIO25        Display Digit 4
GPIO15        Display Segment A
GPIO22        Display Segment B
GPIO27        Display Segment C
GPIO12        Display Segment D
GPIO13        Display Segment E
GPIO18        Display Segment F
GPIO26        Display Segment G
GPIO14        Display Segment P (dot)

Pinout schematic

Once the prototype works, you should save it as a schematic diagram using EDA (electronic design automation) software like Kicad, where you can also generate a PCB design that can be sent for manufacturing. Otherwise, you can always save the schematic on paper, as you won't remember where every cable was going in a couple of months' time...

Kicad is a bit tricky and it's good to practice with simpler projects first. Despite this, it's quite manageable for intermediate users, as it basically consists of searching for and choosing symbols for our components and connecting their pins according to the specifications.

To avoid messing up the sketch with cables, I used names for every connection, which is also valid in Kicad. Also, you'll see that the ESP32-Thing is made up of two 20-pin headers, as I didn't find a working symbol and didn't have time to design one properly. What really matters is that the design works and is coherent with reality.

Kicad schematic

The next step is to assign realistic footprints to each symbol, so we can then design a printed circuit board which can be ordered (usually from China) for about €15 per 5 boards.

You don't need to go crazy on this, especially if you're not experienced. I only need to make the soldered connections simpler (about 90 of them in this case) while keeping a compact design.

Clock programming

Most microcontrollers, like the ESP32-Thing, are compatible with the Arduino IDE, which makes it simpler to connect the board to a PC and load a clock program.

Before starting, it's important to make a list of the tasks and functions we want to include in the program and update it as we code. This way you can try different functions separately and debug every step to find errors quickly. In my case, and after many trials, the program consists of the following:

  1. Define libraries and variables.
  2. Configure pins.
  3. Connect WiFi.
  4. Get date via SNTP.
  5. Disconnect and turn off WiFi (saves battery).
  6. Convert date into digits.
  7. Show digits on display.
  8. Start timer.
  9. Start reading program change pin.
  10. Change program on pushbutton activation.
    1. Read sensors.
    2. Show temperature on display.
  11. Update time after timer ending (every minute).
  12. Restart timer.

I don't want to spend too long on the code, and it's not the tidiest I have either, but here it is for anyone who wants to copy it, and also on GitHub:

https://github.com/TheRoam/Tempsi-7

//----------------------------------------------------------------//
//    Temps-i 7 WiFi clock and temperature in 4 digit display     //
//  v 1.0.1                                                       //
//                                                                //
//  Interfaces:                                                   //
//  - Sparkfun ESP32-Thing microcontroller                        //
//  - BMP-280 temperature and pressure digital sensor             //
//  - 4 digit 7 segment display                                   //
//  - Program push button                                         //
//  - LiPo Battery                                                //
//                                                                //
//  Detailed documentation:                                       //
//  https://theroamingworkshop.cloud                              //
//                                                                //
//                © THE ROAMING WORKSHOP 2022                     //
//----------------------------------------------------------------//
#include <esp_wifi.h>
#include <WiFi.h>
#include <WiFiMulti.h>
#include "time.h"
#include "sntp.h"
#include <Wire.h>
#include <Adafruit_BMP280.h>
//BMP280 sensor using I2C interface
Adafruit_BMP280 bmp;
#define BMP_SCK  (4)
#define BMP_MISO (21)
#define BMP_MOSI (4)
#define BMP_CS   (21)
//Sensor variables
float TEMP=0;     //temperature variable
float ALT=0;      //altitude variable
float PRES=0;     //pressure variable
float hREF=1020.0;//sea level reference pressure in hPa
//define time variables
RTC_DATA_ATTR long long TIME=0; //concatenated time
RTC_DATA_ATTR long long d=0;  //day
RTC_DATA_ATTR long long m=0;  //month
RTC_DATA_ATTR long long Y=0;  //year
RTC_DATA_ATTR long long H=0;  //hour
RTC_DATA_ATTR long long M=0;  //minute
RTC_DATA_ATTR long long S=0;  //second
RTC_DATA_ATTR uint32_t dS=0;  //seconds counter for dot
RTC_DATA_ATTR struct tm timeinfo; //saves full date variable
long long inicio=0; //saves start time
long long ahora=0;  //saves current time
//Define digit pins in an array, in display order, for looping
//Numbers match ESP32-Thing GPIO number
int DigPins[4]{
  17,// first digit (GPIO 17)
  23,//second digit (GPIO 23)
  19,//third digit  (GPIO 19)
  25//fourth digit  (GPIO  25)
};
//Define segment pins
//Numbers match ESP32-Thing GPIO number
int SegPins[8]{
  14,   //P
  26,   //g
  18,   //f
  13,   //e
  12,   //d
  27,   //c
  22,   //b
  15    //a
};
//Auxiliary variables
//Numbers match ESP32-Thing GPIO number
int ProgPin=32;   //Pin used for the program button
int ButtonStatus=1;
int ledPin=5;     //Used to blink the ESP32-Thing blue led
int ProgNum=-1;   //Define a variable to keep track of current program number
// WIFI
// Define your wifi network and credentials
char ssid1[] = "YOUR_WIFI_SSID1";
char pass1[] = "YOUR_WIFI_PASS1";
char ssid1[] = "YOUR_WIFI_SSID2";
char pass1[] = "YOUR_WIFI_PASS2";
WiFiMulti wifiMulti;
// NTP variables
// Update time zone if needed
const char* ntpServer1 = "pool.ntp.org";
const char* ntpServer2 = "time.nist.gov";
const long  gmtOffset_sec = 3600;
const int   daylightOffset_sec = 0; //this will be corrected later with software
const char* time_zone = "CET-1CEST,M3.5.0,M10.5.0/3";  // TimeZone rule for Europe/Rome including daylight adjustment rules (optional)
// Displayed characters in every digit are a byte array indicating ON (1) and OFF (0) segments
// Use a wiring pattern that matches an understandable byte chain so you can make them up easily
// Segment/byte pattern: Babcdefgp --> where 1 is HIGH (ON) and 0 is LOW (OFF)
// Refer to a character by calling this array, i.e.:
//  - call number 3 by calling ns[3]
//  - call letter A by calling ns[20]
//  - call underscore symbol (_) by calling ns[49]
byte ns[50]{ // Array Position - Byte character
  B11111100,// 0-0
  B01100000,// 1-1
  B11011010,// 2-2
  B11110010,// 3-3
  B01100110,// 4-4
  B10110110,// 5-5
  B10111110,// 6-6
  B11100000,// 7-7
  B11111110,// 8-8
  B11110110,// 9-9
  B11111101,// 10-0.
  B01100001,// 11-1.
  B11011011,// 12-2.
  B11110011,// 13-3.
  B01100111,// 14-4.
  B10110111,// 15-5.
  B10111111,// 16-6.
  B11100001,// 17-7.
  B11111111,// 18-8.
  B11110111,// 19-9.
  B11101110,// 20-A
  B00111110,// 21-b
  B10011100,// 22-C
  B01111010,// 23-d
  B10011110,// 24-e
  B10001110,// 25-f
  B10111100,// 26-G
  B00101110,// 27-h
  B00001100,// 28-I
  B11111000,// 29-J
  B01101110,// 30-K(H)
  B00011100,// 31-L
  B00101010,// 32-m(n)
  B00101010,// 33-n
  B00111010,// 34-o
  B11001110,// 35-P
  B11100110,// 36-q
  B00001010,// 37-r
  B10110110,// 38-S
  B00011110,// 39-t
  B01111100,// 40-U
  B00111000,// 41-v
  B00111000,// 42-w(v)
  B01101110,// 43-X(H)
  B01110110,// 44-y
  B11011010,// 45-Z
  B00000001,// 46-. (dot)
  B11000110,// 47-* (asterisk)
  B00000010,// 48-- (hyphen)
  B00010000,// 49-_ (underscore)
};
//array to store displayed digits
int digits[4];
//digit calculation variables
int first_digit = 0;
int second_digit = 0;
int third_digit = 0;
int fourth_digit = 0;
//counters for looping digits and current number
int dig=0;
int n=0;
int dot=1;
void setup()
{
  // Start disabling bluetooth and WIFI to save energy
  // Just power WIFI later, when needed.
  esp_err_t esp_bluedroid_disable(void);
  esp_err_t esp_bt_controller_disable(void);
  WiFi.disconnect(true);
  WiFi.mode(WIFI_OFF);
  
  // Start serial communication for any debug messages
  // Commented for production; uncomment for debugging
  //Serial.begin(115200);
  //Define I2C pins, as we are not using standard ones
  Wire.begin(21,4);
  // Activate digit pins looping the pin array.
  for (dig=0; dig<4; dig++){
    pinMode(DigPins[dig], OUTPUT);
  }
  for (dig=0; dig<4; dig++){  //set them LOW (turn them OFF)
    digitalWrite(DigPins[dig], LOW);
  }
  // Activate segment pins
  for (int i=0; i<8; i++){
    pinMode(SegPins[i],OUTPUT);
  }
  // Activate LED pin
  pinMode(ledPin, OUTPUT);
  // Activate PROG pin
  pinMode(ProgPin, INPUT_PULLUP);
  // Turn ON Wifi
  wifiON();
  
  // Setup NTP parameters
  sntp_set_time_sync_notification_cb( timeavailable );
  sntp_servermode_dhcp(1);
  configTime(gmtOffset_sec, daylightOffset_sec, ntpServer1, ntpServer2);
  //wait for date
  do{
    delay(100);
  }while(TIME==0);
  delay(500);
  //WIFI can be turned off now
  wifiOFF();  
  // Setup BMP280
  unsigned status;
  //BMP-280 I2C Address:
  // 0x76 if SD0 is grounded
  // 0x77 if SD0 is high
  status = bmp.begin(0x76);
  if (!status) {
    Serial.println(F("Could not find a valid BMP280 sensor, check wiring or "
                      "try a different address!"));
    Serial.print("SensorID was: 0x"); Serial.println(bmp.sensorID(),16);
    Serial.print("        ID of 0xFF probably means a bad address, a BMP 180 or BMP 085\n");
    Serial.print("   ID of 0x56-0x58 represents a BMP 280,\n");
    Serial.print("        ID of 0x60 represents a BME 280.\n");
    Serial.print("        ID of 0x61 represents a BME 680.\n");
    ESP.restart();
    while (1) delay(10);
  }
  //Default settings from datasheet.
  bmp.setSampling(Adafruit_BMP280::MODE_NORMAL,     // Operating Mode.
                  Adafruit_BMP280::SAMPLING_X2,     // Temp. oversampling
                  Adafruit_BMP280::SAMPLING_X16,    // Pressure oversampling
                  Adafruit_BMP280::FILTER_X16,      // Filtering.
                  Adafruit_BMP280::STANDBY_MS_500); // Standby time.
  //get initial readings
  readSensors();
  
  // Get correct time
  sync_clock();
  num_shift();
  // Set start time
  inicio = millis();
  // Set dot time
  dS=millis();
}
void loop(){
  // After setup, this function will loop until shutdown.
  
  // Start time counter using the chip's millisecond counter
  uint32_t ahora=millis();
  
  // Check if the counter has reached 60000 milliseconds.
  // Calibrate if you realize that time desyncs
  // This depends on processor calculation, which can vary with input voltage (if on batteries) and temperature.
  while ( (ahora-inicio+(S*1000)) < 59500){
    // Run dot blinker when not showing temperature
    //commented as it sometimes desyncs the hour digit (needs fixing)
    /*if(ProgNum==-1){
      dot_blinker();
    }*/
    
    // Reading push button:
    // when pressed change status
    if(digitalRead(ProgPin)==1){
      ButtonStatus=1;
    }
    // when released after press, set "program change" status
    // this avoids constant change when holding
    if(digitalRead(ProgPin)==0 && ButtonStatus==1){
      ButtonStatus=2;
    }
    // Status 2 -> Program change
    if(ButtonStatus==2){
      //reset status
      ButtonStatus=0;
      //update program number
      ProgNum=-ProgNum;
      Serial.println(ProgNum);
      //change to temp
      if(ProgNum==1){
       //light LED
       digitalWrite(ledPin, HIGH); 
       //save TIME
       TIME=digits[0]*1000+(digits[1]-10)*100+digits[2]*10+digits[3];
       //show temp
       room_temp();
      }else if(ProgNum==-1){  //change to time
        //turn off LED
        digitalWrite(ledPin, LOW);
        //restore time
        split_time(TIME);
      }
    }
    
    //update time counter
    ahora=millis();
    num_shift();
    delay(6);
  }
  // End of while() after 60 seconds
  // Reset initial time
  S=0;
  inicio=millis();
  //make sure the hour digit keeps its dot (values 10-19 carry the dot)
  if(digits[1]<10){
    digits[1]=digits[1]+10;
    dot=0;
  }
  //update time
  //if program is in temperature, change to time
  if(ProgNum==1){
    split_time(TIME);
    ProgNum=-1;
  }
  updateMinutes();
}
//-WIFI function
//--Setup and turn Wifi ON
void wifiON(){
  // Set up WIFI (ESP32 Thing only working with wifiMulti library)
  Serial.println("Conectando");
  wifiMulti.addAP(ssid1,pass1);
  //wifiMulti.addAP(ssid2,pass2);
  int led=1;
  int boot=1;
  wifiMulti.run();
  //WiFi.disconnect(true);
  
  while((WiFi.status() != WL_CONNECTED)){
    digitalWrite(ledPin, HIGH);
    delay(500);
    digitalWrite(ledPin,LOW);
    delay(500);
    wifiMulti.run();
  }
  // Make sure LED is off when finished
  digitalWrite(ledPin,LOW);
  // When CONNECTED, while loop ends.
  Serial.print("Conectado a ");
  Serial.println(WiFi.SSID());
  Serial.println(WiFi.localIP());
}
//-Disable WIFI
void wifiOFF(){
  //turn WIFI off as it's not needed any more
  WiFi.disconnect(true);
  WiFi.mode(WIFI_OFF);
  //disable wifi (deinit clears all flash data)
  esp_wifi_deinit();
}
//----BMP280 functions
//-Read sensor data
void readSensors(){
  TEMP=bmp.readTemperature();
  Serial.println("T: "+(String)TEMP+"ºC");
  PRES=bmp.readPressure();
  Serial.println("P: "+(String)PRES+"hPa");
  ALT=bmp.readAltitude(hREF);
  Serial.println("h: "+(String)ALT+"msnm");
}
//----NTP functions
//-Callback function (get's called when time adjusts via NTP)
void timeavailable(struct timeval *t)
{
  Serial.println("Got time adjustment from NTP!");
  simpleTime();
}
void simpleTime()
{ 
  if(!getLocalTime(&timeinfo)){
    Serial.println("No time available (yet)");
    return;
  }
  Serial.print("Synced time: ");
  Serial.println(&timeinfo, "%A, %B %d %Y %H:%M:%S");
  //save time
  sync_clock();
}
//-
//PROGRAM #0: HELLO at startup
//function to say "HOLA" at start up while connecting to WiFi
void ini_HOLA(){
  int h=30;
  int o=34;
  int l=31;
  int a=20;
  digits[0]=46;
  digits[1]=46;
  digits[2]=46;
  digits[3]=46;
}
//PROGRAM #1: WiFi synced time
//function that gets current time via WiFi
void sync_clock(){
  Y=timeinfo.tm_year;
    TIME=TIME+Y*10000000000;
  m=timeinfo.tm_mon;
  m=m+1;
    TIME=TIME+m*100000000;
  d=timeinfo.tm_mday;
    TIME=d*1000000;
  H=timeinfo.tm_hour;
  //check daylight saving time and correct hour
  //"27 Mar (03 27) +1 hour; 30 Oct (10 30) back -1 hour"
  if( ( m*100+d >= 327 ) && ( m*100+d < 1030 ) ){
    H=H+1;
  }else{
    H=H;
  }
  TIME=TIME+H*10000;
  M=timeinfo.tm_min;
    TIME=TIME+M*100;
  S=timeinfo.tm_sec;
    TIME=TIME+S;
    
  //split the current 4-digit time into single digits
  split_time((TIME/100)-((TIME/100)/10000)*10000);
}
//number splitting function to separate time string into digits
void split_time(long long num) {
  first_digit = num / 1000;
  digits[0] = first_digit;
  int first_left = num - (first_digit * 1000);
  second_digit = first_left / 100;
  digits[1] = second_digit;
  //add the fixed seconds dot to the hour digit (add 10)
  digits[1] = digits[1]+10;
  int second_left = first_left - (second_digit * 100);
  third_digit = second_left / 10;
  digits[2] = third_digit;
  fourth_digit = second_left - (third_digit * 10);
  digits[3] = fourth_digit;
}
// number shifting function
void num_shift(){
  for (dig=0; dig<4; dig++){// turn digits off
    digitalWrite(DigPins[dig], HIGH);
  }
  
  //turn them ON (LOW) one by one
    digitalWrite(DigPins[n], LOW);
    for(int seg=7; seg>=0; seg--){
      //read byte array for digit
      int x = bitRead(ns[digits[n]],seg);
      //turn the segments ON or OFF
      digitalWrite(SegPins[seg],x);
    }
    n++;// move to next no.
    if (n==4){// if no. is 4, restart
      n=0;
    }
}
//Getting room temperature from the BMP-280 sensor (read over I2C in readSensors)
void room_temp(){
  readSensors();
  //This will return temperature in XY.Z format
  //We only keep the integer part, so we can use the last two digits to display the temperature units as well: "XYºC"
  //Getting first digit in byte format ns[X] where X=int(XY.Z/10)=int(X.YZ)=X
  digits[0]=int(TEMP/10)-int(TEMP/100)*10;
  //Getting second digit in byte format ns[Y] where Y=int(XY.Z)-X*10=int(XY.Z)-int(XY.Z/10)*10
  digits[1]=int(TEMP/1)-int(TEMP/10)*10;
  //Setting temperature units as degree celsius (ºC) in byte format
  digits[2]=47; //asterisk *
  digits[3]=22; //character C
}
void updateMinutes(){
  //add 1 to the last digit
  digits[3]=digits[3]+1;
  //if greater than 9, reset to 0
  if (digits[3]>9){
    digits[3]=0;
    //then add 1 to the third digit
    updateM0();
  }
  //send last digit to display
  digitalWrite(DigPins[3], LOW);
}
void updateM0(){
  //add 1 to third digit
  digits[2]=digits[2]+1;
  //if greater than 5, reset to 0
  if (digits[2]>5){
    digits[2]=0;
    //then add 1 to current hours
    updateH1();
  }
  //send digit to display
  digitalWrite(DigPins[2], LOW);
}
void updateH1(){
  //add 1 hour
  digits[1]=digits[1]+1;
  // if greater than 19, reset to 10
  // (instead of number 0-9, we use numbers 10-19 in order to add the "dot" to the display)
  if (digits[1]>19){
    digits[1]=10;
    //then add 1 to the first digit
    updateH0();
  }// reset when it's 24h (go back to 00)
  if(digits[1]>13 && digits[0]==2){
    digits[0]=0;
    digits[1]=10;
  }
  //display digits
  digitalWrite(DigPins[1], LOW);
  digitalWrite(DigPins[0], LOW);
}
void updateH0(){
  //add 1 to first digit
  digits[0]=digits[0]+1;
  //if greater than 2, reset to 0 (it shouldn't happen as we reset earlier, but just in case..)
  if (digits[0]>2){
    digits[0]=0;
  }
  //display digit
  digitalWrite(DigPins[0], LOW);
}
void dot_blinker(){
  //every second, blink the dot
    if( millis()-dS > 1000 && dot == 1){
      digits[1]=digits[1]-10;
      dS=millis();
      dot=0;
    }
    if( millis()-dS > 250 && dot == 0){
      digits[1]=digits[1]+10;
      dot=1;
    }
}

Note that you'll need these additional libraries installed through the Arduino IDE:

  • esp32 board manager by Espressif
  • WiFiMulti library
  • Adafruit_BMP280 library
    • (the rest of the libraries derive from these ones)
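As a reference, a minimal standalone sketch of roughly what these libraries translate to in code is shown below. This is not the project's code: it assumes the esp32 board package listed above, and the network credentials, NTP server and BMP280 I2C address are placeholders; the clock functions above would plug into it.

#include <WiFi.h>             //core WiFi support from the esp32 board package
#include <WiFiMulti.h>        //connect to the strongest of several known networks
#include <Wire.h>             //I2C bus used by the BMP280 driver
#include <Adafruit_BMP280.h>  //Adafruit BMP280 temperature/pressure sensor
#include <time.h>             //struct tm and NTP time used by the clock functions

WiFiMulti wifiMulti;
Adafruit_BMP280 bmp;

void setup() {
  Serial.begin(115200);
  wifiMulti.addAP("your-ssid", "your-password"); //placeholder credentials
  while (wifiMulti.run() != WL_CONNECTED) {      //wait for one of the networks to connect
    delay(500);
  }
  configTime(3600, 0, "pool.ntp.org");           //CET base offset; DST is handled manually as in the code above
  if (!bmp.begin(0x76)) {                        //0x76 is an assumption; some modules use 0x77
    Serial.println("BMP280 not found");
  }
}

void loop() {
  //the actual display and clock logic lives in the functions shown above
}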

You should test the code constantly during prototyping so you can change any pin assignment if something malfunctions. Once it's all soldered, a failing connection will be really hard to track down and fix.

Assembly

Soldering

Once the code is checked and the PCB is designed, you can start soldering with caution, as a wrong movement can damage your modules or produce errors in the program.

Case design

Having it all soldered you'll have a better idea of the final volume of the device. I measured everything precisely with a digital caliper to build a 3D model of it.

Blender 3D model of all components in their final position

This way you can now design a case around the model so you can craft it using a 3D printer. You could also use other types of assembling materials like plywood or metal.

I usually craft two pieces (one as a base and another as a cover) so they can be screwed together. I also include several openings: for the USB cable connection, for the program switch button, and for a rotary support that holds the clock in a slot on the television.

I also use white PLA for one piece and grey for the other to bring more life to the design, and it all ends up like this:

And that's it. I hope you liked it and find it useful. Now you can get your own WiFi clock crafted! Drop any doubts about this clock on Twitter! 🐦

🐦 @RoamingWorkshop

Installing UnityHub in Ubuntu 22

If you recently updated to Ubuntu 22 and tried to install UnityHub following the steps in their website:

https://docs.unity3d.com/hub/manual/InstallHub.html#install-hub-linux

Everything looks fine until you run the program and this happens:

>> unityhub
This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason:
ConnectionLost: Timeout! Cannot connect to Licensing Client within 60000ms
    at Function.fromType (/opt/unityhub/resources/app.asar/node_modules/@licensing/licensing-sdk/lib/core/ipc/licensingIpc.js:51:16)
    ...

Luckily, surfing the web you usually find the solution, and this one was in the Unity forum itself:

https://forum.unity.com/threads/installing-unity-hub-on-ubuntu-22-04.1271816/#post-8136473

Let’s check the installation step by step:

Installing UnityHub on Linux

Following the official steps from their site (first link in the post):

  1. Add the Unity repository to your sources list:
    sudo sh -c 'echo "deb https://hub.unity3d.com/linux/repos/deb stable main" > /etc/apt/sources.list.d/unityhub.list'
  2. Add the public key to make it trustful:
    wget -qO - https://hub.unity3d.com/linux/keys/public | sudo apt-key add -
  3. Update your repositories:
    sudo apt update
  4. Install UnityHub:
    sudo apt-get install unityhub

It should all go fine, apart from a warning about a “chrome-sandbox” folder, but that's not the problem. Running unityhub from the terminal still produces the error above.

Installing libssl1.1

The problem is that Ubuntu 22 uses a more recent version of the libssl package, but we can still download the version used by Ubuntu 20.

  1. Access Ubuntu 20 packages site, where you find libssl1.1
    https://packages.ubuntu.com/focal/amd64/libssl1.1/download
  2. Right-click -> save as… over the link to the file starting with security.ubuntu.com/ubuntu… (or just click the link below; you’ll download a .deb installer file)
    http://security.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb
  3. Double-click the downloaded file and install the package (or install it from the terminal, as shown after this list).
  4. Now run unityhub in the terminal and done!
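If you prefer to stay in the terminal, the same download and install of that .deb can be done with wget and dpkg. This is a sketch assuming the exact file name from the link in step 2; adjust it if the package has been superseded by a newer revision:

# download the Ubuntu 20 libssl1.1 package and install it
wget http://security.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb
sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb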

🐦 @RoamingWorkshop

Dall·E 2 Beta: image generation using artificial intelligence (AI)

I had this pending for a while, and following some tweets and the GOIA project by Iker Jiménez, it seemed that AI image generation had become really accessible. And it has. There has been enormous progress in this field. But it all comes at a price.

Dall·E, the image generation model from OpenAI, one of Elon Musk’s giants, launched its open API for everyone to test.

You get 18$ of credit to use during 3 months just by signing up. Anything beyond that is paid, but you'll have enough for testing, and it's only about 0.02$ per picture. Additionally, it's easy to use and the results are great.

It’s worth trying to see what the state of the art can do, and at the end I will show you what the rest of us mortals can use daily.

But first… let’s start with a picture of Teide volcano in Tenerife. Is it real or virtual?

Teide picture, Tenerife, created with Dall·E 2 by OpenAI. The Roaming Workshop 2022.

OpenAI API

Ok, very quickly: OpenAI is the Artificial Intelligence (AI) megaproject from Elon Musk & Co. Among the numerous capabilities of these neural networks, we can generate images from natural language text, but there's much more.

AI will make our lives easier in the future, so have a look at all the examples that are open during Beta testing:

https://beta.openai.com/examples/

Basically, a computer is "trained" with real, well-described examples so that, from them, it can generate new content to satisfy a request.

The computer will not generate exactly what you want or expect; instead, it generates its own result from your request and from what it has learnt during training.

Image generation from natural language might be the most graphical application, but the potential is unimaginable. Up there I just asked Dall·E for the word "Teide". But what about things that have not happened, that we have never seen, or that are pure imagination? Well, AI is able to bring your thoughts to life. Whatever you can imagine is shown on screen.

Now, let's see how to use it.

Dall·E 2 API

To "sell us" the future, OpenAI makes it very easy. We'll find plenty documentation to spend hours in a Beta trial version completely open for three months, and you'll only need an email address.

Sign up to use Dall·E 2 from their web site, pressing Sign Up button.

https://openai.com/dall-e-2/

You'll have to verify your email address and then log into your account. Be careful because you'll be redirected to the commercial site https://labs.openai.com

The trial site is this one:

https://beta.openai.com

Create an API key

From the top menu, click on your profile and select View API keys.

If you've just registered, you'll have to generate a new secret key, then copy it somewhere safe, as you'll need it to use the API commands.

Using Dall·E 2

That's it. No more requirements. Let's start playing!

Let's see how to generate an image according to the documentation:

https://beta.openai.com/docs/api-reference/images/create

To keep it simple we can use curl, so you just need to open up a terminal, be it on Windows, macOS or Linux. The code indicated in the docs is the following:

curl https://api.openai.com/v1/images/generations \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{
  "prompt": "A cute baby sea otter",
  "n": 2,
  "size": "1024x1024"
}'

Here we need to type our secret key in place of YOUR_API_KEY.

Also write a description for the image you want inside prompt.

With n we define the number of images generated by this request.

And size is the picture size; the allowed values are 256x256, 512x512, or 1024x1024.

I'm going to try with "a map of Mars in paper".

curl https://api.openai.com/v1/images/generations -H "Content-Type: application/json" -H "Authorization: Bearer sk-YourApiKeyHere" -d "{\"prompt\":\"A map of Mars in paper\",\"n\":1,\"size\":\"1024x1024\"}"

TIP! Copy+paste this code in your terminal, replacing your secret key "sk-..." and the prompt.

You'll get back a URL as a response to your request, which is a web link to the generated image.
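The response is a small JSON object containing a list of links, one per image requested with n. Roughly, it looks like this (the timestamp is illustrative and the URL is shortened here):

{
  "created": 1668874800,
  "data": [
    { "url": "https://..." }
  ]
}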

Open the link to see the result:

"A map of Mars in paper" with Dall·E 2. The Roaming WorkShop 2022.

Amazing!

Pricing

Well, well... let's get back down to Earth. You didn't think this speed and quality would be free, did you? Go back to your OpenAI account, where you can see the use you make of the API and how you spend your credit.

https://beta.openai.com/account/usage

As I was saying earlier, the Beta offers 18$ to spend during 3 months, and every picture at 1024px is about 0,065$ (0,002$ for the lowest quality).

All the main AI platforms similar to OpenAI (Midjourney, Nightcafe, DreamAI, etc.) work this way, offering some credit to start with, since what you're really paying for is the computing power of their servers.

Alternatives (free ones)

There are various open-source and totally free alternatives. I invite you to try them all and choose the one you like the most, but I must warn you that they have many software and hardware requirements. You especially need a good graphics card (or several of them). In the end, you need to weigh up how much you'll use the AI, and whether it isn't worth spending a couple of cents for a couple of pictures every now and then.

From the 4 recommendations below I have successfully tested the last two, the least powerful of them:

1. Pixray

https://github.com/pixray/pixray

It looks promising for its simple installation and use. Don't be put off by the picture above (that's just its pixel-art module): it has plenty of complex options for very detailed image generation.

There is also plenty of documentation written by users, and support via Discord.

On the other hand, they recommend about 16GB of VRAM (the video memory on your graphics card). Mine crashed with an out-of-memory error before I could see any results...

2. Dalle-Flow

https://github.com/jina-ai/dalle-flow#Client

Very technical and complex. The results look brilliant, but I couldn't get it installed or working on the web. It uses several specific Python modules that supposedly run on Google Colab. Either it's discontinued and currently broken, or the documentation is poor, or I'm completely useless at this... Additionally, they recommend about 21GB of VRAM to run it standalone, although it could be shared using Colab... I could never check.

3. Craiyon

https://www.craiyon.com/

The former Dalle-mini, created by Boris Dayma, has a practical web version that is totally free (no credits or payments, only a few ads while loading).

Although results aren't brilliant from scratch, we can improve them using Upscayl (I'll tell you more about it later).

4. Dalle-playground

https://github.com/saharmor/dalle-playground/

One of the many repositories derived from dalle-mini. In this case it comes in a handy package that we can use freely, at no cost, on our home PC, with very few hardware and software requirements. It runs as a local web app in your browser, spinning up a server that you can access from anywhere on your network.

Together with Upscayl, they make a good tandem for generating AI images on your own PC for free.

In the next post we'll see how to generate these images on an ordinary PC, with Dalle-playground and Upscayl.

That's all for now! I await your doubts or comments about this Dall·E post on 🐦 Twitter!

🐦 @RoamingWorkshop

Three.js: a 3D model viewer for your site

I was writing a post where I wanted to insert a 3D model to picture it better, and I even thought about building a viewer myself. But I didn’t have to browse long to find Three.js.

https://threejs.org/

It’s all invented!

For this example, I'll use the official CDN libraries, instead of downloading the files to a server.

Create a basic .html file as suggested in the documentation:

https://threejs.org/docs/index.html#manual/en/introduction/Creating-a-scene

<!DOCTYPE html>
<html>
	<head>
		<meta charset="utf-8">
		<title>My first three.js app</title>
		<style>
			body { margin: 0; }
		</style>
	</head>
	<body>
        <script async src="https://unpkg.com/[email protected]/dist/es-module-shims.js"></script>

        <script type="importmap">
          {
            "imports": {
              "three": "https://unpkg.com/[email protected]/build/three.module.js"
            }
          }
        </script>

        <script>
        //App code goes here
        </script>
	</body>
</html>

Create a scene

Let's continue with the example and fill in the second <script> block, defining a scene with an animated rotating cube:

<script type="module">
        import * as THREE from 'three';
	const scene = new THREE.Scene();
	const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );

	const renderer = new THREE.WebGLRenderer();
	renderer.setSize( window.innerWidth, window.innerHeight );
	document.body.appendChild( renderer.domElement );

	const geometry = new THREE.BoxGeometry( 1, 1, 1 );
	const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
	const cube = new THREE.Mesh( geometry, material );
	scene.add( cube );

	camera.position.z = 5;

	function animate() {
		requestAnimationFrame( animate );

		cube.rotation.x += 0.01;
		cube.rotation.y += 0.01;

		renderer.render( scene, camera );
	};

	animate();
</script>

All of this would look like this:

Add drag controls and background

Now we have a base to work with. We can add some functionality by importing the OrbitControls module.

//Import new modules at the beginning of the script
import { OrbitControls } from 'https://unpkg.com/[email protected]/examples/jsm/controls/OrbitControls.js';

//then add the mouse controls after declaring the camera and renderer
const controls = new OrbitControls( camera, renderer.domElement );

Also, you can modify the background easily, but you will need to host your images within your app on a server, or run it locally, because of CORS restrictions. I will be using the background image of the blog header, which was taken from Stellarium.

First define a texture. Then, add it to the scene:

//do this before rendering, while defining the scene
//define texture
const texture = new THREE.TextureLoader().load( "https://theroamingworkshop.cloud/demos/Unity1-north.png" );

//add texture to scene
scene.background=texture;

Full code:

<!DOCTYPE html>
<html>
	<head>
		<meta charset="utf-8">
		<title>My first three.js app</title>
		<style>
			body { margin: 0; }
		</style>
	</head>
	<body>
        <script async src="https://unpkg.com/[email protected]/dist/es-module-shims.js"></script>

        <script type="importmap">
          {
            "imports": {
              "three": "https://unpkg.com/[email protected]/build/three.module.js"
            }
          }
        </script>
<body style="margin: 0; width:100%;height:300px;">
        <script async src="https://unpkg.com/[email protected]/dist/es-module-shims.js"></script>

        <script type="importmap">
          {
            "imports": {
              "three": "https://unpkg.com/[email protected]/build/three.module.js"
            }
          }
        </script>

    <script type="module">

    import * as THREE from 'three';
	import { OrbitControls } from 'https://unpkg.com/[email protected]/examples/jsm/controls/OrbitControls.js';

	const scene = new THREE.Scene();
    
	const texture = new THREE.TextureLoader().load( "https://theroamingworkshop.cloud/demos/Unity1-north.png" );
	scene.background=texture;

    const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );

	const renderer = new THREE.WebGLRenderer();
	renderer.setSize( window.innerWidth, window.innerHeight );
	document.body.appendChild( renderer.domElement );
    
    const controls = new OrbitControls( camera, renderer.domElement );

	const geometry = new THREE.BoxGeometry( 1, 1, 1 );
	const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
	const cube = new THREE.Mesh( geometry, material );
	scene.add( cube );

	camera.position.z = 5;

	function animate() {
		requestAnimationFrame( animate );

		cube.rotation.x += 0.01;
		cube.rotation.y += 0.01;

		renderer.render( scene, camera );
	};

	animate();
</script>
</body>
</html>

Insert a 3D model

Now let's replace the cube with our own 3D model which, in the case of Three.js, should be in glTF format (.GLB or .GLTF), as it is the best supported and renders fastest (.fbx, .stl, .obj and others are also supported).

I will export a .glb of this basic Raspberry Pi 4B case that I did some time ago using Blender:

Now, replace the <script> block with one based on the "webgl_loader_gltf" example that was shown at the start of the post:

<script type="module">
import * as THREE from 'three';
import { GLTFLoader } from 'https://unpkg.com/[email protected]/examples/jsm/loaders/GLTFLoader.js';
import { OrbitControls } from 'https://unpkg.com/[email protected]/examples/jsm/controls/OrbitControls.js';

let camera, scene, renderer;

init();
render();

function init() {

	const container = document.createElement( 'div' );
	document.body.appendChild( container );

	camera = new THREE.PerspectiveCamera( 30, window.innerWidth / window.innerHeight, 0.1, 20 );
    camera.position.set( 0.2, 0.2, 0.2 );

	scene = new THREE.Scene();        
    scene.add( new THREE.AmbientLight( 0xffffff, 0.75 ) );

	const dirLight = new THREE.DirectionalLight( 0xffffff, 1 );
	dirLight.position.set( 5, 10, 7.5 );
	dirLight.castShadow = true;
	dirLight.shadow.camera.right = 2;
	dirLight.shadow.camera.left = - 2;
	dirLight.shadow.camera.top	= 2;
	dirLight.shadow.camera.bottom = - 2;
	dirLight.shadow.mapSize.width = 1024;
	dirLight.shadow.mapSize.height = 1024;
	scene.add( dirLight );

    //model
     const loader = new GLTFLoader();
	 loader.load( 'https://theroamingworkshop.cloud/threeJS/models/rPi4case/rPi4_case_v1.glb', function ( gltf ) {
		scene.add( gltf.scene );
		render();
	 } );

	renderer = new THREE.WebGLRenderer( { antialias: true } );
            
	renderer.setPixelRatio( window.devicePixelRatio );
	renderer.setSize( window.innerWidth, window.innerHeight );
	renderer.toneMapping = THREE.ACESFilmicToneMapping;
	renderer.toneMappingExposure = 1;
	renderer.outputEncoding = THREE.sRGBEncoding;
	container.appendChild( renderer.domElement );

	const controls = new OrbitControls( camera, renderer.domElement );
	controls.addEventListener( 'change', render );
    controls.minDistance = 0.001;
	controls.maxDistance = 1;
	controls.target.set( 0.03, 0.01, -0.01 );
	controls.update();
	window.addEventListener( 'resize', onWindowResize );
}
function onWindowResize() {
	camera.aspect = window.innerWidth / window.innerHeight;
	camera.updateProjectionMatrix();
	renderer.setSize( window.innerWidth, window.innerHeight );
	render();
}
function render() {
	renderer.render( scene, camera );
}
</script>

Basically it does the following:

  • Import used modules:
    • GLTFLoader will load our model in .glb format
    • OrbitControls for camera rotation and position
  • Define the scene:
    • define a camera
    • define the light (in this case, ambient and directional; try commenting each of them to see the difference)
    • define a background
  • Load the model in the scene
  • Define rendering parameters and render.

All of it finally looks like this (click and drag!):

Hope it's useful! Doubts or comments on 🐦 Twitter!

🐦 @RoamingWorkshop

😃 Use an Emoji as a favicon for your web-apps

Seen on: https://css-tricks.com/emoji-as-a-favicon/

Every web needs its favicon: that little icon next to your page title.

Normally, your favicon would be a small image file that you link in your .html document, like in this blog (although here it is set by WordPress).

<link rel="icon" href="https://theroamingworkshop.cloud/b/wp-content/uploads/2022/08/cropped-TRW-favicon-1-32x32.png" sizes="32x32">

But you don’t need to waste time designing an image for every single web app of personal use, nor settle for the browser’s ugly default icon.

You can use an Emoji, which will give your page a personal touch, just as Lea Verou indicated in this tweet some time ago:

Summing up

Basically, all you have to do is copy-paste the following:

  1. Paste this code inside your <html>
<link rel="icon" href="data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2290%22>⚡</text></svg>">
  2. Replace the icon “⚡” with any of the HTML symbols, like the ones you find at w3schools (just select it and copy it):

https://www.w3schools.com/charsets/ref_emoji.asp

  3. If you see strange symbols rather than an icon, add UTF-8 encoding to your document with:
    <meta charset="UTF-8">

Sample code

Here I leave some sample code so you can try it in a .html file on your computer.

<html>

<meta charset="UTF-8">

<title>My web app</title>

<link rel="icon" href="data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2290%22>⚡</text></svg>">

</html>

Find and share this and more web design tricks on 🐦 Twitter!

🐦 @RoamingWorkshop
