tech explorers, welcome!


PS5 Controller hanger

Simplicity at its best.

Today I’m sharing a very simple yet effective and elegant PS5 controller hanger.

Some concept/design images:

It just stands on its own without any screwing or gluing. It hangs from the PS5 itself, and it's expandable by design (you can probably fit up to 4 controllers on one side alone).

Some real photos:

And finally, the .stl ready-to-print files:

https://theroamingworkshop.cloud/demos/PS5-DualSense-Holder-A_v1.stl

https://theroamingworkshop.cloud/demos/PS5-DualSense-Holder-B_v1.stl

🎅 Merry Christmas! 🎁

Also available on Cults3D for free download 💌

Case + OLED display for Raspberry Pi with status panel

I was tempted to just switch cases from my Raspberry Pi 4 to the Raspberry Pi 5, but look at it…

  • 16×2 LCD display from an Arduino Starter Kit
  • C++ program with the WiringPi library
  • DIY case

I was really proud of it at the time and it was good learning back in 2020, but surely I could do better. Surely I could do a cool status panel!!

Components

  • Raspberry Pi (probably compatible with any of them)
  • OLED display (RGB 1.5 inch size is ideal)
  • PCB prototype board
  • Jumper cables
  • Soldering iron
  • Casing (3D printed or handcrafted)

Software config

I'll configure the Raspberry Pi to interact with this display using Python, with the Adafruit CircuitPython library for the SSD1351 display controller:
https://learn.adafruit.com/adafruit-1-5-color-oled-breakout-board/python-wiring-and-setup#setup-3042417

It's as simple as installing CircuitPython with the following command:

sudo pip3 install --upgrade click setuptools adafruit-python-shell build adafruit-circuitpython-rgb-display

If your Python version can't find a compatible CircuitPython version, add --break-system-packages at the end. (It won't break anything today, but don't get used to it...)

sudo pip3 install --upgrade click setuptools adafruit-python-shell build adafruit-circuitpython-rgb-display --break-system-packages

Wiring

Now wire your display according to the manufacturer guidance. Mine is this one from BuyDisplay:

https://www.buydisplay.com/full-color-1-5-inch-arduino-raspberry-pi-oled-display-module-128x128

OLED Display    Raspberry Pi (pin #)
GND             GND (20)
VCC             3V3 (17)
SCL             SPI0 SCLK (23)
SDA             SPI0 MOSI (19)
RES             GPIO25 (22)
DC              GPIO24 (18)
CS              SPI0 CE0 (24)

Use a site like pinout.xyz to find a suitable wiring configuration.

You're ready to do some tests before making your final move to the PCB.

Script config

You can try Adafruit's demo script. Just make sure that you choose the right display and update any changes to the pin assignment (use GPIO numbers/names rather than physical pin numbers):

# Imports used by the Adafruit demo script:
import board
import digitalio
from adafruit_rgb_display import ssd1351

# SPI baudrate used in the Adafruit examples
BAUDRATE = 24000000

# Configuration for CS and DC pins (adjust to your wiring):
cs_pin = digitalio.DigitalInOut(board.CE0)
dc_pin = digitalio.DigitalInOut(board.D24)
reset_pin = digitalio.DigitalInOut(board.D25)

# Setup SPI bus using hardware SPI:
spi = board.SPI()

# 1.5" SSD1351 display
disp = ssd1351.SSD1351(
    spi,
    rotation=270,
    cs=cs_pin,
    dc=dc_pin,
    rst=reset_pin,
    baudrate=BAUDRATE,
)
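Before moving on, it's worth a quick sanity check that the wiring and SPI setup respond. A minimal sketch, assuming the disp object configured above and that color565 is importable from adafruit_rgb_display.rgb as in Adafruit's simpletest examples:

import time

from adafruit_rgb_display.rgb import color565  # packs (R, G, B) into the 16-bit format the panel expects

# Flash a few solid colors; if the panel stays black, re-check the wiring table above
for color in (color565(255, 0, 0), color565(0, 255, 0), color565(0, 0, 255)):
    disp.fill(color)
    time.sleep(1)

disp.fill(0)  # clear the screen again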

Assembly

Here are the STL 3D files for this case design:

Now let's put it all together:

  1. Screw the frame to the display
  2. Solder the 7 pins of the display to 7 jumper cables across the PCB
  3. Wire all connections to the Raspberry Pi
  4. Screw the top and bottom pieces together
  5. Place the display on the support

Final result

I've shared the script you see in the images on GitHub:

https://github.com/TheRoam/RaspberryPi-SSD1351-OLED

It currently displays:

  • Time and date
  • System stats (OS, disk usage and CPU temperature)
  • Local weather from World Meteorological Organization (updated hourly)
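For a rough idea of how a panel like this gets drawn, here is a minimal, hypothetical sketch of one refresh cycle: it reads the CPU temperature from sysfs and pushes a PIL image to the display. It assumes the disp object from the configuration above; the actual script in the repo does quite a bit more (weather, disk usage, icons).

import time

from PIL import Image, ImageDraw, ImageFont

font = ImageFont.load_default()

while True:
    # CPU temperature is exposed by the kernel in millidegrees Celsius
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        cpu_temp = int(f.read()) / 1000

    # Draw the stats on an off-screen image, then push it to the OLED in one go
    image = Image.new("RGB", (disp.width, disp.height))
    draw = ImageDraw.Draw(image)
    draw.text((4, 4), time.strftime("%H:%M  %d/%m/%Y"), font=font, fill=(255, 255, 255))
    draw.text((4, 24), f"CPU: {cpu_temp:.1f} C", font=font, fill=(0, 255, 0))
    disp.image(image)

    time.sleep(1)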

And this is how it ends up looking. Much better, right?

As always, enjoy your tinkering, and let me know any comments or issues on Twitter!

🐦 @RoamingWorkshop

Batman with a grappling gun hanging in your living-room

The truth is that resin printing is next level. Despite the complications of cleaning, the result is really incredible, and today I’m showing you my first print in the Anycubic Photon Mono X2: a Batman you can hang in your living-room with his grappling gun.

Models

The print you see is made from two free models, so here's a huge thanks to the authors and a reference to their work:

All I've done is add an armature to the model to give it the desired pose, and then stick the gun to its hand. Here is my model so you can print it just as shown.

Preview

Extras

To finish the figure, you can create a cape and a hook so it can be hung.

Cape

For the cape I've cut a piece of fabric from some old sportswear. It's common clothing, usually black, and with just the right sheen and texture.

Start by cutting a square piece and then give it some shape.

Along the top edge, wrap in a wire that will let you adjust the cape around the figure's neck.

Hook

Just find some kind of hook or clamp that lets you tie a thin thread around it. I've used a small paper clamp that I can hook onto the books on my shelf.

And that's how you get your awesome Batman hanging in your living-room. Hope you liked it, and any ideas or comments, drop them on Twitter!

🐦 @RoamingWorkshop

UNIHIKER-PAL: open-source python home assistant simplified

PAL is a simplified version of the Python home assistant that I'm running on the DFRobot UNIHIKER, and I'm releasing it as free open source.

This is just a demonstration of how simple voice-recognition command triggering can be in Python, and it will hopefully serve as a guide for your own assistant.

Current version: v0.2.0 (updated September 2024)

Features

Current version includes the following:

  • Voice recognition: uses the open-source SpeechRecognition Python library and returns an array of all the recognized audio strings.
  • Weather forecast: uses World Meteorological Organization API data to provide today's weather and the forecast for the next 3 days. Includes WMO weather icons.
  • Local temperature: reads a local BMP-280 temperature sensor to provide a room temperature indicator.
  • IoT HTTP commands: basic workflow to control IoT smart home devices using HTTP commands. Currently turns a Shelly 2.5 smart switch ON and OFF.
  • Power-save mode: controls brightness to lower power consumption.
  • Connection manager: regularly checks Wi-Fi and pings the internet to restore the connection when it's lost.
  • PAL voice samples: cloned voice of PAL from "The Mitchells vs. The Machines" using the AI voice model CoquiAI-TTS v2.
  • UNIHIKER buttons: button A opens a simple menu (intended to grow into a more complex menu in the future).
  • Touchscreen controls: restore brightness (center), switch program (left) and close program (right) by touching different areas of the screen.

Installation

  1. Install dependencies:
    pip install SpeechRecognition pyyaml
  2. Download the github repo:
    https://github.com/TheRoam/UNIHIKER-PAL
  3. Upload the files and folders to the UNIHIKER in /root/upload/PAL/
  4. Configure the WIFI credentials, IoT devices, theme, etc. in PAL_config.yaml
  5. Run the python script python /root/upload/PAL/PAL_v020.py from the Mind+ terminal or from the UNIHIKER touch interface.

If you enable Auto boot from the Service Toggle menu, the script will run every time the UNIHIKER is restarted.

If you get a "python3: can't open file" error, check the UNIHIKER FAQ:

https://www.unihiker.com/wiki/faq#Error:%20python3:%20can't%20open%20file…

Configuration

Version 0.2.0 adds configuration via a YAML file that is read when the program starts.

CREDENTIALS:
    ssid: "WIFI_SSID"
    pwd: "WIFI_PASSWORD"

DEVICES:
    light1:
        brand: "Shelly25"
        ip: "192.168.1.44"
        channel: 0

    light2:
        brand: "Shelly25"
        ip: "192.168.1.44"
        channel: 1

    light3:
        brand: "Shelly1"
        ip: "192.168.1.42"
        channel: 0

PAL:
    power_save_mode: 0
    temperature_sensor: 0
    wmo_city_id: "195"
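For reference, a minimal sketch of how a file like this can be read with the pyyaml dependency installed earlier (the variable names here are illustrative; the actual parsing lives in PAL_v020.py):

import yaml

# Load the configuration once at start-up
with open("/root/upload/PAL/PAL_config.yaml") as f:
    config = yaml.safe_load(f)

ssid = config["CREDENTIALS"]["ssid"]
light1 = config["DEVICES"]["light1"]        # {'brand': 'Shelly25', 'ip': '192.168.1.44', 'channel': 0}
ps_mode = config["PAL"]["power_save_mode"]  # 0 disables power saving, 1 enables it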

Location

The variable "CityID" is used by the WMO API to provide more accurate weather forecast for your location. Define it with the parameter wmo_city_id

You can choose one of the available locations from the official WMO list:

https://worldweather.wmo.int/en/json/full_city_list.txt

IoT devices

At the moment, PAL v0.2.0 only includes functionality for Shelly2.5 for demonstration purposes.

Use the variables lampBrand, lampChannel and lampIP to suit your Shelly 2.5 configuration.

This is just an example of how different devices could be configured. These variables are meant to adapt the particulars of the HTTP command that is sent to each IoT device.

More devices will be added in future releases, like Shelly1, ShellyDimmer, Sonoff D1, etc.
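As a rough illustration of what such an HTTP command looks like, here is a hypothetical helper (names and defaults are illustrative, and the repo's actual turnLAMP function may differ) using the Shelly Gen1 relay endpoint, the same one covered in the Shelly 1 post further down this page:

import urllib.request

def shelly_turn(state, ip="192.168.1.44", channel=0):
    """Switch a Shelly relay on or off over the local HTTP API (state is 'on' or 'off')."""
    url = f"http://{ip}/relay/{channel}?turn={state}"
    with urllib.request.urlopen(url, timeout=5) as response:
        return response.read()  # JSON reply with the new relay status

# Example: switch channel 0 of light1 on
shelly_turn("on")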

Power save mode

Power-save mode reduces the screen brightness in order to lower the UNIHIKER's power consumption. This is done using the system command "brightness".

Change "ps_mode" variable to enable ("1") or disable ("0") the power-save mode.

Room temperature

Change "room_temp" variable to enable ("1") or disable ("0") the local temperature reading module. This requires a BMP-280 sensor to be installed using the I2C connector.

Check this other post for details on sensor installation:

https://theroamingworkshop.cloud/b/en/2490/

Other configurations in the source code:

Theme

Some theme configuration is available: you can choose between different sets of eyes as the background image.

Set the variables "eyesA" and "eyesB" to one of the following values to change PAL's background expression:

  • "happy"
  • "angry"
  • "surprised"
  • "sad"

"eyesA" is used as the default background and "eyesB" will be used as a transition when voice recognition is activated and PAL is talking.

The default value for "eyesA" is "surprised" and it will change to "happy" when a command is recognized.

Customizable commands

Adding your own commands to PAL is simple using the "comandos" function.

Every piece of audio recognized by SpeechRecognition is sent as a string to the "comandos" function, which filters the content and triggers whichever command matches.

Just define all the possible strings that could be recognized to trigger your command (note that sometimes SpeechRecognition provides wrong or inaccurate transcriptions).

Then define the command that is triggered if the string is matched.

def comandos(msg):
    # LAMP ON
    if any(keyword in msg for keyword in ["turn on the lamp", "turn the lights on","turn the light on", "turn on the light", "turn on the lights"]):
        turnLAMP("on")
        os.system("aplay '/root/upload/PAL/mp3/Turn_ON_lights.wav'")

Activation keyword

You can customize the keywords or strings that will activate command functions. If any of the keywords in the list is recognized, the whole sentence is sent to the "comandos" function to find any specific command to be triggered.

In the case of PAL v0.2, these are the keywords that activate it (90% of the time it gets transcribed as PayPal):

activate=[
    "hey pal",
    "hey PAL",
    "pal",
    "pall",
    "Pall",
    "hey Pall",
    "Paul",
    "hey Paul",
    "pol",
    "Pol",
    "hey Pol",
    "poll",
    "pause",
    "paypal",
    "PayPal",
    "hey paypal",
    "hey PayPal"
]

You can change these to any other sentence or name, so that PAL is activated when you call it by those strings.
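To see how these pieces fit together, here is a rough, hypothetical sketch of the listen-activate-dispatch loop using the SpeechRecognition library (simplified: PAL also handles the display, audio replies, multiple transcription alternatives and error cases):

import speech_recognition as sr

recognizer = sr.Recognizer()

while True:
    # Listen on the default microphone
    with sr.Microphone() as source:
        audio = recognizer.listen(source)

    try:
        # Transcribe the audio; lower-case it to simplify keyword matching
        msg = recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        continue  # nothing intelligible, keep listening

    # Only dispatch to comandos() if one of the activation keywords was heard
    if any(keyword.lower() in msg for keyword in activate):
        comandos(msg)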

PAL voice

Use the sample audio file "PAL_full" below (also in the GitHub repo under /mp3) as reference audio for CoquiAI-TTS v2 voice cloning to produce your own personalized voices:

https://huggingface.co/spaces/coqui/xtts

TIP!
You can check this other post for voice cloning with CoquiAI-XTTS:
https://theroamingworkshop.cloud/b/en/2425

Demo

Below are a few examples of queries and replies from PAL:

"Hey PAL, turn on the lights!"
"Hey PAL, turn the lights off"

Future releases (To-Do list)

I will keep developing these features in my personal assistant and will update the open-source release every now and then. Get in touch via GitHub if you have a special interest in any of them:

  • Advanced menu: allow configuration and manual triggering of commands.
  • IoT devices: include all Shelly and Sonoff HTTP API commands.
  • Time query: requires cloning all number combinations...
  • Wikipedia/browser query: requires real-time voice generation...
  • Improved animations / themes.

Any thoughts, issues or improvements, I'll be happy to read them via github or Twitter!

🐦 @RoamingWorkshop

🌡UNIHIKER real-time temperature sensor set up in 2 minutes

I keep experimenting with the UNIHIKER board by DFRobot and it's incredibly fast to get things working on it. Today I'll show you how to set up an on-screen real-time temperature display in two minutes, using a BMP-280 module and zero programming.

Prerequisites

Here's the trick: I'm assuming you already have a few things working before starting the countdown:

  • Download and install Mind+, DFRobot's IDE for UNIHIKER.
    On Linux, it is a .deb file which does take a while to install:
    https://mindplus.cc/download-en.html
  • Solder a BMP-280 temperature and pressure module and connect it to the I2C cable. You might need to bend your pins slightly as the connector seems to be 1mm nano JST.

You're ready to go!

Set-up

  1. In Mind+, go to the Blocks editor and open the Extensions menu.
  2. Go to the pinpong tab and select the pinpong module (which enables interaction with the UNIHIKER pinout) and the BMP-280 module extension, for interaction with the temperature module.
  3. Go back to the Blocks editor and start building your code block. Just navigate through the different sections on the left-hand side and drag all you need below the Python program start block:
    • pinpong - initialize board.
    • bmp280 - initialize module at the standard address 0x76.
    • control - forever block (to introduce a while True loop).
    • unihiker - add objects to the display. I first add a filled rectangle object to clear the previous text, then add a text object. Specify the X,Y coordinates where every object will be displayed on the screen and its color.
    • bmp280 - read temperature property. Drag this inside the text field of the text object.
    • python - (optional) add a print to show the data on the terminal. I included all the other sensor values.
    • control - add a wait object and wait for 1 second before the next loop.
      All of it should look something like this:

Launch

And that's your whole program done, without any programming! Press RUN at the top and watch it load and display on the UNIHIKER screen. Touch the sensor with your finger to see the values change as the temperature rises.

Wasn't that only 2 minutes? Let me know via Twitter ; )

🐦 @RoamingWorkshop

DFRobot case for UNIHIKER

The small and efficient form factor of the UNIHIKER makes it really easy to craft a case for it.

For my smart home assistant I was looking for an android-like style, and the DFRobot logo is perfect for the UNIHIKER, paying tribute to its developers.

Github Repo

I've set up a GitHub repository where I'll be open-sourcing all the model files and where people can contribute their own, so feel free to create a pull request and share your designs!

https://github.com/TheRoam/DFRobot-UNIHIKER-case/

It includes a GitHub page where the models can be previewed:

https://theroam.github.io/DFRobot-UNIHIKER-case/

Unihiker_DFRcase_v1

This is my first release, used for testing and including all the basic features for my home assistant.

Files

https://github.com/TheRoam/DFRobot-UNIHIKER-case/tree/main/blender

Features

  • Top openings for text display through touch screen.
  • Side opening for USB-C connection.
  • Back opening for external sensor cabling.
  • Back extrusions for 40mm speaker placement.
  • Foot-like support for vertical standing.

Case parts

  1. Bottom piece acts as the casing.
  2. Internal support piece holds the UNIHIKER board to the bottom piece using the screws on the board.
  3. Top piece acts as a cover and clips onto the bottom piece.
  4. Feet support enables vertical standing of the case.
  5. Antennas just to match the design of the DFRobot logo.

Assembly

  1. Place the screws into the internal support piece and screw it to the UNIHIKER.
  2. Place the UNIHIKER inside the bottom piece. If you're using external sensors, you can route your cabling outside through the opening at the back.
  3. Hold the board to the case using a pair of 2.5mm screws from the back of the bottom piece.
  4. Fit the top piece in place; it should just hold itself.
  5. Place the feet support and the antennas in place. You can glue these to make sure they stay put.

And that's your case crafted with a nice DFRobot android look!

Share your thoughts on Twitter!

🐦 @RoamingWorkshop

🐸Coqui-AI/TTS: ultra fast voice generation and cloning from multilingual text

A few months ago I covered the TorToiSe-TTS repo, which made it easy to generate text-to-speech, although it only worked with English models.

https://theroamingworkshop.cloud/b/en/2083/%f0%9f%90%a2tortoise-tts-ai-text-to-speech-generation/

But the AI world is moving so fast that today I'm bringing an evolution that completely supersedes the previous post, with complex multilingual voice generation and cloning in a matter of seconds: Coqui-AI TTS.

https://github.com/coqui-ai/TTS

Web version

If you're in a rush and don't want any trouble, you can use the free Hugging Face space and get your cloned voice in a few seconds:

https://huggingface.co/spaces/coqui/xtts

  1. Write the text to be generated
  2. Select language
  3. Upload your reference file
  4. Configure the other options (tick the boxes: Cleanup Reference Voice, Do not use language auto-detect, Agree)
  5. Request cloning to the server (Send)

Installation

Another strength of Coqui-AI TTS is the almost instant installation:

  • You'll need Python >= 3.9, < 3.12.
  • RAM: not as much as for image generation. 4GB should be enough.
  • Create a project folder, for example "text-2-speech". Using a Linux terminal:
    mkdir text-2-speech
  • It's convenient to create a specific Python environment to avoid package incompatibilities, so you need python3-venv. I'll create an environment called TTSenv:
    cd text-2-speech
    python3 -m venv TTSenv
  • Activate the environment in the terminal:
    source TTSenv/bin/activate
  • If you only need voice generation (without cloning or training), install TTS directly with pip:
    pip install TTS
  • Otherwise, install the full repo from Coqui-AI TTS github:
    git clone https://github.com/coqui-ai/TTS
    cd TTS
    pip install -e .[all]

Checking language models and voices

The first thing you can do is check the available models for transforming text into voice in different languages.

Type the following in your terminal:

tts --list_models

No API token found for 🐸Coqui Studio voices - https://coqui.ai
Visit 🔗https://app.coqui.ai/account to get one.
Set it as an environment variable `export COQUI_STUDIO_TOKEN=`


Name format: type/language/dataset/model
1: tts_models/multilingual/multi-dataset/xtts_v2 [already downloaded]
2: tts_models/multilingual/multi-dataset/xtts_v1.1 [already downloaded]
3: tts_models/multilingual/multi-dataset/your_tts
4: tts_models/multilingual/multi-dataset/bark [already downloaded]
5: tts_models/bg/cv/vits
6: tts_models/cs/cv/vits
7: tts_models/da/cv/vits
8: tts_models/et/cv/vits
9: tts_models/ga/cv/vits
10: tts_models/en/ek1/tacotron2
11: tts_models/en/ljspeech/tacotron2-DDC
12: tts_models/en/ljspeech/tacotron2-DDC_ph
13: tts_models/en/ljspeech/glow-tts
14: tts_models/en/ljspeech/speedy-speech
15: tts_models/en/ljspeech/tacotron2-DCA
16: tts_models/en/ljspeech/vits
17: tts_models/en/ljspeech/vits--neon
18: tts_models/en/ljspeech/fast_pitch
19: tts_models/en/ljspeech/overflow
20: tts_models/en/ljspeech/neural_hmm
21: tts_models/en/vctk/vits
22: tts_models/en/vctk/fast_pitch
23: tts_models/en/sam/tacotron-DDC
24: tts_models/en/blizzard2013/capacitron-t2-c50
25: tts_models/en/blizzard2013/capacitron-t2-c150_v2
26: tts_models/en/multi-dataset/tortoise-v2
27: tts_models/en/jenny/jenny
28: tts_models/es/mai/tacotron2-DDC [already downloaded]
29: tts_models/es/css10/vits [already downloaded]
30: tts_models/fr/mai/tacotron2-DDC
31: tts_models/fr/css10/vits
32: tts_models/uk/mai/glow-tts
33: tts_models/uk/mai/vits
34: tts_models/zh-CN/baker/tacotron2-DDC-GST
35: tts_models/nl/mai/tacotron2-DDC
36: tts_models/nl/css10/vits
37: tts_models/de/thorsten/tacotron2-DCA
38: tts_models/de/thorsten/vits
39: tts_models/de/thorsten/tacotron2-DDC
40: tts_models/de/css10/vits-neon
41: tts_models/ja/kokoro/tacotron2-DDC
42: tts_models/tr/common-voice/glow-tts
43: tts_models/it/mai_female/glow-tts
44: tts_models/it/mai_female/vits
45: tts_models/it/mai_male/glow-tts
46: tts_models/it/mai_male/vits
47: tts_models/ewe/openbible/vits
48: tts_models/hau/openbible/vits
49: tts_models/lin/openbible/vits
50: tts_models/tw_akuapem/openbible/vits
51: tts_models/tw_asante/openbible/vits
52: tts_models/yor/openbible/vits
53: tts_models/hu/css10/vits
54: tts_models/el/cv/vits
55: tts_models/fi/css10/vits
56: tts_models/hr/cv/vits
57: tts_models/lt/cv/vits
58: tts_models/lv/cv/vits
59: tts_models/mt/cv/vits
60: tts_models/pl/mai_female/vits
61: tts_models/pt/cv/vits
62: tts_models/ro/cv/vits
63: tts_models/sk/cv/vits
64: tts_models/sl/cv/vits
65: tts_models/sv/cv/vits
66: tts_models/ca/custom/vits
67: tts_models/fa/custom/glow-tts
68: tts_models/bn/custom/vits-male
69: tts_models/bn/custom/vits-female
70: tts_models/be/common-voice/glow-tts

Name format: type/language/dataset/model
1: vocoder_models/universal/libri-tts/wavegrad
2: vocoder_models/universal/libri-tts/fullband-melgan [already downloaded]
3: vocoder_models/en/ek1/wavegrad
4: vocoder_models/en/ljspeech/multiband-melgan
5: vocoder_models/en/ljspeech/hifigan_v2
6: vocoder_models/en/ljspeech/univnet
7: vocoder_models/en/blizzard2013/hifigan_v2
8: vocoder_models/en/vctk/hifigan_v2
9: vocoder_models/en/sam/hifigan_v2
10: vocoder_models/nl/mai/parallel-wavegan
11: vocoder_models/de/thorsten/wavegrad
12: vocoder_models/de/thorsten/fullband-melgan
13: vocoder_models/de/thorsten/hifigan_v1
14: vocoder_models/ja/kokoro/hifigan_v1
15: vocoder_models/uk/mai/multiband-melgan
16: vocoder_models/tr/common-voice/hifigan
17: vocoder_models/be/common-voice/hifigan
Name format: type/language/dataset/model
1: voice_conversion_models/multilingual/vctk/freevc24 [already downloaded]

Or filter the result with grep, for example to get the Spanish models:

tts --list_models | grep "/es"

28: tts_models/es/mai/tacotron2-DDC [already downloaded]
29: tts_models/es/css10/vits [already downloaded]

Text to speech

With all this you're ready to turn text into speech in a matter of seconds and in the language of your choice.

In the previous terminal, write the following, specifying the right model name:

tts --text "Ahora puedo hablar en español!" --model_name "tts_models/es/css10/vits" --out_path output/tts-es.wav

Make sure the output folder exists, then check your result. The first time, several files will be downloaded and you'll have to accept the Coqui-AI license. After that, voice generation only takes a few seconds:
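If you prefer to stay in Python rather than the command line, the same generation can be done through the TTS API (a minimal sketch, consistent with the cloning script shown below):

from TTS.api import TTS

# Load the Spanish VITS model and synthesize straight to a file
tts = TTS("tts_models/es/css10/vits").to("cpu")
tts.tts_to_file(text="Ahora puedo hablar en español!", file_path="output/tts-es.wav")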

Voice cloning

Lastly, the most amazing feature of this model is voice cloning from only a few seconds of recorded audio.

As in the previous post, I took about 30 seconds of Ultron's voice from the film Avengers: Age of Ultron.

Sample in spanish:

Sample in english:

Now let's prepare a Python script to set all the needed parameters. It will do the following:

  • Import torch and TTS
    import torch
    from TTS.api import TTS
  • Define the compute device (cuda or cpu). Using cpu should be enough (cuda may well crash).
    device="cpu"
  • Define text to be generated.
    txt="Voice generated from text"
  • Define the reference audio sample (a .wav file of about 30 seconds)
    sample="/voice-folder/voice.wav"
  • Call the TTS model
    tts1=TTS("model_name").to(device)
  • File creation
    tts1.tts_to_file(txt, speaker_wav=sample, language="es", file_path="output-folder/output-file.wav")

I put all of this in a script called TRW-clone.py, which looks like this:

import torch
from TTS.api import TTS

# Get device ('cuda' or 'cpu')
device="cpu"

#Define text
txt="Bienvenido a este nuevo artículo del blog. Disfruta de tu visita."
#txt="Welcome to this new block post... Enjoy your visit!"

#Define audio sample
sample="../my-voices/ultron-es/mix.wav"
#sample="../my-voices/ultron-en/mix.wav"

#Run cloning
tts1 = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

tts1.tts_to_file(txt, speaker_wav=sample, language="es", file_path="../output/ultron-es.wav")

Run it from the TTS folder where the repo was installed:

cd TTS
python3 TRW-clone.py

Results

Here are the results I got from my first tests.

Spanish:

English:

And with a couple of iterations you can get really amazing results.

Any doubts or comments, you can still drop me a line on Twitter/X.

🐦 @RoamingWorkshop

Shelly 1: wifi switch/scheduler, local HTTP API config and plug box assembly

Today I'm bringing you the second chance I'm giving Shelly. My first Shelly Dimmer blew up from excess temperature inside a connection box, but another Shelly 2.5 controlling two lights is holding up fine, also fitted in the wall.

Maybe the difference is the extra 5ºC that they withstand, so I'm going to fit a wall plug with a switch and scheduler using a tiny Shelly 1, and just hope it survives.

Apart from the tiny size, Shellys are easy to configure, so we'll also see how to control them locally via the HTTP API.

Requirements

My goal is to set up a wall plug that I can control and schedule over Wi-Fi, in my case to manage the electric water heater. This is what I'll use:

  • Shelly 1.
  • Two wire electric cable (line and neutral).
  • Male plug.
  • Female plug socket.
  • Assembling material for a case (3D printer, plywood, etc).

Electric connection

Let's look at Shelly's user manual and see how we need to make the connections:

https://www.shelly.com/documents/user_guide/shelly_1_multi_language.pdf

The idea for this standard schematic is to connect Shelly 1 to a light bulb and its switch, where every symbol means the following:

  • L: line
  • N: neutral
  • SW: switch
  • I: input
  • O: output

As I want to enable a plug socket, the schematic varies slightly: I won't be using any switch, so I can connect the input directly to the line. On the other hand, for some reason unknown to me there is no cabling inside the connection box, so I bring the electric line from another plug using a cable... In the end, it all looks like this:

TIP! I'd say I mixed up the cable color convention, but it doesn't matter in this case as it's a closed circuit and it will work anyway.

Assembly

You might notice that I made a small 3D-printed support to guide the cabling, as well as a lid to cover the gap around the socket. Modelling every part in 3D, with real measurements, helps to distribute the space properly and ensure that your solution fits:

I'll leave here the 3D .stl models ready to send to your slicer software.

Shelly-Plug_support_v1.stl

https://theroamingworkshop.cloud/demos/Shelly-Plug_support_v1.stl

Shelly-Plug_tapa_v1.stl

https://theroamingworkshop.cloud/demos/Shelly-Plug_tapa_v1.stl

Finally, this is how it all looks crafted in place. It's not the perfect fit, but it does the job I needed.

Internet connection

Let's now see how to bring the Shelly 1 to life and control it locally.

Unlike Sonoff, Shelly makes this much easier and you just need to follow the user manual.

  1. Power Shelly 1 using the male plug.
  2. This will activate an AP (Access Point), a Wi-Fi network with an SSID looking like "shelly1-01A3B4". Connect to this Wi-Fi network using a smartphone or PC.
  3. Once connected, use a web browser to access the IP 192.168.33.1 and it will take you to Shelly's web interface for device configuration.
  4. Once in, configure the device (in the Internet & Security menu) so that it automatically connects to your local Wi-Fi network; it's also recommended to restrict access with a username and password.

We're all set to communicate with Shelly 1 locally.

Shelly HTTP API usage

To use the HTTP API commands you must know the device's IP on your local network.

Find IP in the router

You can access the network map in your router, usually at http://192.168.1.1

The address and the password should be on a sticker on your router. There you'll see your device with a name like shelly1-XXXXXXXXXXXX:

Find IP using nmap

In a terminal you can use the tool nmap to scan your local network.

  • Download it if not done yet:
    sudo apt-get update
    sudo apt-get install nmap
  • Scan your network (using sudo you'll get the MAC address, which is useful as the IP could change when restarting the router)
    sudo nmap -v -sn 192.168.1.0/24

Send HTTP requests to the device

Shelly's HTTP API is well documented in their website:

https://shelly-api-docs.shelly.cloud/gen1/#common-http-api

In order to communicate with the device, you need to send HTTP requests using some software like Postman or using curl or wget in a terminal.

The request will be sent to the device IP with:

$ curl -X GET http://192.168.1.XX/command

If you defined a username and password, you need to include them in the URL as below, or you'll receive a "401 Unauthorized" response:

$ curl -X GET http://user:pass@192.168.1.XX/command

Now let's see some specific cases:

Device information

http://[user]:[pass]@[ip]/status

  • curl

curl -X GET 'http://user:pass@192.168.1.XX/status'

  • Response
{"wifi_sta":{"connected":true,"ssid":"MYWIFINETWORK","ip":"192.168.1.XX","rssi":-70},"cloud":{"enabled":false,"connected":false},"mqtt":{"connected":false},"time":"19:30","unixtime":1699295403,"serial":1,"has_update":false,"mac":"A4CF12F407B1","cfg_changed_cnt":0,"actions_stats":{"skipped":0},"relays":[{"ison":false,"has_timer":false,"timer_started":0,"timer_duration":0,"timer_remaining":0,"source":"input"}],"meters":[{"power":0.00,"is_valid":true}],"inputs":[{"input":0,"event":"","event_cnt":0}],"ext_sensors":{},"ext_temperature":{},"ext_humidity":{},"update":{"status":"idle","has_update":false,"new_version":"20230913-112003/v1.14.0-gcb84623","old_version":"20230913-112003/v1.14.0-gcb84623"},"ram_total":51688,"ram_free":39164,"fs_size":233681,"fs_free":146333,"uptime":2679}

Turn (on/off)

http://[usr]:[pass]@[ip]/relay/0?turn=[on/off]

  • curl

curl -X GET 'http://user:pass@192.168.1.XX/relay/0?turn=on'

  • Response
{"ison":true,"has_timer":false,"timer_started":0,"timer_duration":0,"timer_remaining":0,"source":"http"}

The value 0 in the URL is the number of the relay or internal switch in the Shelly. In this case there is only one, but on the Shelly 2.5 you have two relays, so you can address them individually by changing this value.

Scheduler

http://[usr]:[pass]@[ip]/settings/relay/0?schedule_rules=[HHMM]-[0123456]-[on/off]

  • curl

curl -X GET 'http://user:pass@192.168.1.XX/settings/relay/0?schedule_rules=1945-0123456-on'

  • Response
{"name":"CALENTADOR","appliance_type":"General","ison":false,"has_timer":false,"default_state":"off","btn_type":"toggle","btn_reverse":0,"auto_on":0.00,"auto_off":0.00,"power":0.00,"schedule":true,"schedule_rules":["1945-0123456-on"]}

In this case, the URL defines the following schedule rule parameters:

  • HHMM: hour and minute that activate the rule
  • 0123456: days of the week when the rule is active
  • on/off: status that the rule triggers

This way, to schedule the device to turn on and off (except at weekends), you could send a request like this one:

curl -X GET http://192.168.1.XX/settings/relay/0?schedule_rules=2300-01234-on,0700-01234-off
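If you'd rather script this than type curl, a small Python sketch using the requests library does the same thing (the IP and credentials here are placeholders):

import requests

SHELLY_IP = "192.168.1.XX"   # replace with your device's IP
AUTH = ("user", "pass")      # only needed if you restricted access in the web interface

# Same rule as the curl example above: ON at 23:00 and OFF at 07:00 on days 01234
url = f"http://{SHELLY_IP}/settings/relay/0?schedule_rules=2300-01234-on,0700-01234-off"
response = requests.get(url, auth=AUTH, timeout=5)
print(response.json()["schedule_rules"])  # should echo back the rules just set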

Obviously you can also configure the schedule rules from the web interface, or just check the commands worked:

And that covers it all. Go ahead and fill your house with tiny, fully customizable Shellys. Any questions or comments, on Twitter 🐦 please! (though given what's going on with the X thing, who knows how long I'll last there...)

☮️ #WarTraces: the aerial footprint of a world at war.

It all started as an exercise in curiosity, wondering whether the attack on the Kerch bridge in Crimea could be seen from space.

To my surprise, this might be one of the least visible attacks to have occurred during the war in Ukraine.

With every news report, I checked the publicly available satellite imagery from the European Space Agency and found evidence of many of the reports. Satellite images are purely objective; there is no manipulation or narrative behind them.

This is the absolute opposite of taking one side or another. This is about showing the reality and the scale of war. The one and only truth is that violence and war have to be condemned in all their forms and prevented with whatever minimal effort each of us can make.

#WarTraces starts in Ukraine, because it's close to Europeans, on TV 24/7, and affecting many powerful economies. But there are many ongoing conflicts in the world. Many that seem to be forgotten. Many we don't seem to care about. Many that have to be stopped, just like this one.

Follow me on 🐦 Twitter to know when this post is updated!

🐦 @RoamingWorkshop

PEACE ☮ BROTHERS

TIP! Click an image for full size.

Ukraine War: 24 Feb 2022 - today

📖 Wikipedia timeline.

Syrian Civil War: 2011 - today

📖 Wikipedia timeline.

Israeli-Palestinian Conflict: 1948 - today

📖 Wikipedia timeline.

endleZZ v0.2: anniversary update

One year ago endleZZ was conceived in a waiting room with no signal as a way to develop an offline game without any special requirements: just a browser and an .html file.

The idea is great and it's come in handy many times to kill some time, but it really deserved a minimal visual and functional update.

So here it is, version 0.2: anniversary update! 🎉

Web version

You can access the web version that I host on this server:

https://theroamingworkshop.cloud/endlezz

Local version

Or play it locally (and offline) by downloading its source code, or using the .html file from GitHub:

https://github.com/TheRoam/endleZZ

Changelog

New features:

  • options menu
  • time controls: pause, restore and exit
  • new map elements (trees and rocks) that spawn randomly in each game
  • initial weather system trials: added random cloud generation

Bug fixes:

  • bullets now reach the end of the map, regardless of where you click
  • adjusted time calculation for pause/replay
  • overall performance and interaction improvements