tech explorers, welcome!

Category: Demos

Demos

PS5 Controller hanger

Simplicity at its best.

Today I’m sharing a very simple yet effective and elegant PS5 controller hanger.

Some concept/design images:

It just stands on its own without any screws or glue. It hangs from the PS5 itself and it's expandable by design (you can probably fit up to 4 controllers on one side alone).

Some real photos:

And finally, the .stl ready-to-print files:

https://theroamingworkshop.cloud/demos/PS5-DualSense-Holder-A_v1.stl

https://theroamingworkshop.cloud/demos/PS5-DualSense-Holder-B_v1.stl

🎅 Merry Christmas! 🎁

Also available on Cults3D for free download 💌

Batman with a grappling gun hanging in your living-room

The truth is that resin printing is next level. Despite the complications of cleaning, the result is really incredible, and today I’m showing you my first print in the Anycubic Photon Mono X2: a Batman you can hang in your living-room with his grappling gun.

Models

The print that you see is made of two free models, so here's a huge thank-you to the authors and a reference to their work:

All I've done is add an armature to the model to give it the desired pose, and then stick the gun to its hand. Here is my model so you can print it just as shown.

Preview

Extras

To finish the figure, you can create a cape and a hook so it can be hung.

Cape

For the cape I've cut a piece of fabric from some old sportswear. This is common clothing that's usually black, with an adequate sheen and texture.

Start by cutting a square piece and then give it some shape.

At the top, wrap in a wire that will let you adjust the cape around the figure's neck.

Hook

Just find some kind of hook or clamp that lets you tie a thin thread around it. I've used this small paper clamp that I can hook onto the books on my shelf.

And that's how you get your awesome Batman hanging in your living-room. Hope you liked it, and any ideas or comments, drop them on Twitter!

🐦 @RoamingWorkshop

UNIHIKER-PAL: open-source python home assistant simplified

PAL is a simplified version of my python home assistant, running on the DFRobot UNIHIKER, which I'm releasing as free, open-source software.

This is just a demonstration for voice-recognition command-triggering simplicity using python and hopefully will serve as a guide for your own assistant.

Current version: v0.2.0 (updated September 2024)

Features

Current version includes the following:

  • Voice recognition: using the open-source SpeechRecognition python library, returns an array of all the recognised audio strings (see the sketch after this list).
  • Weather forecast: using World Meteorological Organization API data, provides today's weather and the forecast for the 3 coming days. Includes WMO weather icons.
  • Local temperature: reads local BMP-280 temperature sensor to provide a room temperature indicator.
  • IoT HTTP commands: basic workflow to control IoT smart home devices using HTTP commands. Currently turns ON and OFF a Shelly2.5 smart switch.
  • Power-save mode: controls brightness to lower power consumption.
  • Connection manager: regularly checks wifi and pings to the internet to restore connection when it's lost.
  • PAL voice samples: cloned voice of PAL from "The Mitchells vs. The Machines" using the AI voice model CoquiAI-TTS v2.
  • UNIHIKER buttons: button A enables a simple menu (this is thought to enable a more complex menu in the future).
  • Touchscreen controls: restore brightness (center), switch program (left) and close program (right), when touching different areas of the screen.
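To give an idea of how simple the recognition step is, here's a minimal sketch using the SpeechRecognition library (not PAL's exact code; it assumes a working microphone and the PyAudio package):

import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    r.adjust_for_ambient_noise(source)  # optional calibration against background noise
    audio = r.listen(source)

try:
    # show_all=True returns every alternative transcription instead of only the best guess
    results = r.recognize_google(audio, language="en-US", show_all=True)
    print(results)
except sr.UnknownValueError:
    print("Nothing recognised")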

Installation

  1. Install dependencies:
    pip install SpeechRecognition pyyaml
  2. Download the github repo:
    https://github.com/TheRoam/UNIHIKER-PAL
  3. Upload the files and folders to the UNIHIKER in /root/upload/PAL/
  4. Configure the PAL_config.yaml WIFI credentials, IoT devices, theme, etc.
  5. Run the script with python /root/upload/PAL/PAL_v020.py from the Mind+ terminal or from the UNIHIKER touch interface.

If you enable Auto boot from the Service Toggle menu, the script will run every time the UNIHIKER is restarted.

https://www.unihiker.com/wiki/faq#Error:%20python3:%20can't%20open%20file…

Configuration

Version 0.2.0 includes configuration using a yaml file that is read when the program starts.

CREDENTIALS:
    ssid: "WIFI_SSID"
    pwd: "WIFI_PASSWORD"

DEVICES:
    light1:
        brand: "Shelly25"
        ip: "192.168.1.44"
        channel: 0

    light2:
        brand: "Shelly25"
        ip: "192.168.1.44"
        channel: 1

    light3:
        brand: "Shelly1"
        ip: "192.168.1.42"
        channel: 0

PAL:
    power_save_mode: 0
    temperature_sensor: 0
    wmo_city_id: "195"
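For reference, here's a minimal sketch of how a config like this can be read at start-up with pyyaml (key names as in the example above; error handling omitted):

import yaml

with open("/root/upload/PAL/PAL_config.yaml") as f:
    config = yaml.safe_load(f)

ssid = config["CREDENTIALS"]["ssid"]
light1_ip = config["DEVICES"]["light1"]["ip"]
ps_mode = config["PAL"]["power_save_mode"]
print(ssid, light1_ip, ps_mode)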

Location

The "CityID" variable is used by the WMO API to provide a more accurate weather forecast for your location. Define it with the wmo_city_id parameter.

You can choose one of the available locations from the official WMO list:

https://worldweather.wmo.int/en/json/full_city_list.txt
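As a rough sketch, the forecast request could look like this (the URL pattern and JSON keys are assumptions based on the public worldweather.wmo.int JSON feed, not PAL's exact code):

import requests

city_id = "195"  # value of wmo_city_id from PAL_config.yaml
url = f"https://worldweather.wmo.int/en/json/{city_id}_en.json"
data = requests.get(url, timeout=10).json()

# Print the first forecast day; adapt the keys to whatever the feed actually returns
print(data["city"]["forecast"]["forecastDay"][0])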

IoT devices

At the moment, PAL v0.2.0 only includes functionality for Shelly2.5 for demonstration purposes.

Use variables lampBrand, lampChannel and lampIP to suit your Shelly2.5 configuration.

This is just an example of how different devices could be configured: these variables change the particulars of the HTTP command sent to each type of IoT device.

More devices will be added in future releases, like Shelly1, ShellyDimmer, Sonoff D1, etc.
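For reference, the kind of request sent to a Shelly Gen1 device (Shelly 2.5 / Shelly 1) looks roughly like this sketch, with the IP and channel taken from the config example above:

import requests

def turn_lamp(state, ip="192.168.1.44", channel=0):
    # Shelly Gen1 HTTP API: /relay/<channel>?turn=on|off
    r = requests.get(f"http://{ip}/relay/{channel}", params={"turn": state}, timeout=5)
    return r.ok

turn_lamp("on")   # switch the light on
turn_lamp("off")  # switch it off again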

Power save mode

Power saving reduces the brightness of the device in order to reduce the power consumption of the UNIHIKER. This is done using the system command "brightness".

Change "ps_mode" variable to enable ("1") or disable ("0") the power-save mode.
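A minimal sketch of the idea, assuming the UNIHIKER "brightness" command accepts a 0-100 value:

import os

ps_mode = 1  # power_save_mode from PAL_config.yaml

def set_brightness(level):
    os.system(f"brightness {level}")

if ps_mode == 1:
    set_brightness(10)   # dim the screen to save power
else:
    set_brightness(100)  # full brightness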

Room temperature

Change "room_temp" variable to enable ("1") or disable ("0") the local temperature reading module. This requires a BMP-280 sensor to be installed using the I2C connector.

Check this other post for details on sensor installation:

https://theroamingworkshop.cloud/b/en/2490/
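As a quick reference, a reading over I2C could look like this sketch, using the Pimoroni bmp280 package as an assumption (the linked post covers the actual wiring and library used on the UNIHIKER):

from smbus2 import SMBus
from bmp280 import BMP280

bus = SMBus(1)               # the I2C bus number may differ on the UNIHIKER
sensor = BMP280(i2c_dev=bus)
print(f"Room temperature: {sensor.get_temperature():.1f} C")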

Other configurations in the source code:

Theme

Some theme configuration is available: you can choose between different sets of eyes as the background image.

Use the variables "eyesA" and "eyesB" to specify one of the following values and change the background image expression of PAL:

  • "happy"
  • "angry"
  • "surprised"
  • "sad"

"eyesA" is used as the default background and "eyesB" will be used as a transition when voice recognition is activated and PAL is talking.

The default value for "eyesA" is "surprised" and it will change to "happy" when a command is recognized.

Customizable commands

Adding your own commands to PAL is simple using the "comandos" function.

Every audio recognized by SpeechRecognition is sent as a string to the "comandos" function, which then filters the content and triggers one or another matching command.

Just define all the possible strings that could be recognized to trigger your command (note that sometimes SpeechRecognition provides wrong or inaccurate transcriptions).

Then define the command that is triggered if the string is matched.

def comandos(msg):
    # LAMP ON
    if any(keyword in msg for keyword in ["turn on the lamp", "turn the lights on","turn the light on", "turn on the light", "turn on the lights"]):
        turnLAMP("on")
        os.system("aplay '/root/upload/PAL/mp3/Turn_ON_lights.wav'")
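Following the same pattern, a hypothetical branch to switch the lights off could be added inside comandos (the audio file name here is made up for the example):

    # LAMP OFF (extra branch following the same pattern)
    if any(keyword in msg for keyword in ["turn off the lamp", "turn the lights off", "turn off the lights"]):
        turnLAMP("off")
        os.system("aplay '/root/upload/PAL/mp3/Turn_OFF_lights.wav'")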

Activation keyword

You can customize the keywords or strings that will activate command functions. If any of the keywords in the list is recognized, the whole sentence is sent to the "comandos" function to find any specific command to be triggered.

For PAL v0.2, these are the keywords that activate it (90% of the time it gets recognised as PayPal):

activate=[
    "hey pal",
    "hey PAL",
    "pal",
    "pall",
    "Pall",
    "hey Pall",
    "Paul",
    "hey Paul",
    "pol",
    "Pol",
    "hey Pol",
    "poll",
    "pause",
    "paypal",
    "PayPal",
    "hey paypal",
    "hey PayPal"
]

You can change this to any other sentence or name, so PAL is activated when you call it by these strings.

PAL voice

Use the sample audio file "PAL_full" below (also in the github repo in /mp3) as a reference audio for CoquiAI-TTS v2 voice cloning and produce your personalized voices:

https://huggingface.co/spaces/coqui/xtts

TIP!
You can check this other post for voice cloning with CoquiAI-XTTS:
https://theroamingworkshop.cloud/b/en/2425
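If you prefer to generate the audio locally instead of using the Hugging Face space, a sketch with the Coqui TTS python package and the XTTS v2 model would look like this (the model download is large):

from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Hello, I am PAL. The lights are now on.",
    speaker_wav="PAL_full.wav",      # reference sample from the /mp3 folder
    language="en",
    file_path="Turn_ON_lights.wav",
)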

Demo

Below are a few examples of queries and replies from PAL:

"Hey PAL, turn on the lights!"
"Hey PAL, turn the lights off"

Future releases (To-Do list)

I will be developing these features in my personal assistant, and will be updating the open-source release every now and then. Get in touch via github if you have special interest in any of them:

  • Advanced menu: allow configuration and manually triggering commands.
  • IoT devices: include all Shelly and Sonoff HTTP API commands.
  • Time query: requires cloning all number combinations...
  • Wikipedia/browser query: requires real-time voice generation...
  • Improved animations / themes.

Any thoughts, issues or improvements, I'll be happy to read them via github or Twitter!

🐦 @RoamingWorkshop

The impact of a forest wildfire in 3D with geographical information on Cesium JS

After a few posts about the use of satellite information, let's see how to bring it (almost) all together in a practical and impactful example. The recent wildfires in Spain caught my attention, as I could not imagine how brutal they had been (although nothing compared to the ones in Chile, Australia or the USA). But let's do the exercise without spending too many GB on geographical information.

What I want is to show the extent of the fire that occurred in Asturias in March, but also to show its impact by removing the trees affected by the flames. Let's do it!

Download data

I will need a Digital Surface Model (which includes trees and structures), an orthophoto taken during the fire, and a Digital Terrain Model (which has trees and structures removed) to replace the areas affected by the fire.

1. Terrain models

I'm going to use the great models from Spanish IGN, downloading the products MDS5 and MDT5 for the area.

http://centrodedescargas.cnig.es/CentroDescargas/index.jsp

2. Orthophotos

As for satellite imagery, I decided to go for Landsat this time, as it had a clearer view during the last days of the fire.

I will be using the images taken on 17th February 2023 (before the fire) and 6th April 2023 (still within the last days of the fire).

https://search.earthdata.nasa.gov

Process satellite imagery

We'll use the process i.group from GRASS in QGIS to group the different bands captured by the satellite in a single RGB raster, as we saw in this previous post:

https://theroamingworkshop.cloud/b/en/1761/process-satellite-imagery-from-landsat-or-sentinel-2-with-qgis/

You'll have to do it for every region downloaded (four in my case), which will be joined later using Build virtual raster.

1. True color image (TCI)

Combine bands 4, 3, 2.

2. False color image

Combine bands 5, 4, 3.

3. Color adjustment

To get a better result, you can adjust the minimum and maximum values considered in each band composing the image. These values are found in the Histogram inside the layer properties.

Here you have the values I used for the pictures above:

Band      TCI min   TCI max   FC min   FC max
1 Red     -100      1500      -50      4000
2 Green   0         1500      -100     2000
3 Blue    -10       1200      0        1200

Fire extent

As you can see, the false color image shows the extent of the fire clearly. We'll use it to generate a polygon covering the burnt area.

First, let's query the values in Band 1 (red) which offers a good contrast for values in the area of the fire. They are in the range 300-1300.

Using the process Reclassify by table, we'll assign value 1 to the cells inside this range, and value 0 to the rest.
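If you prefer to script this step, here's a rough equivalent of the reclassification using rasterio (the file names are just placeholders):

import rasterio
import numpy as np

with rasterio.open("landsat_false_color_band1.tif") as src:
    band = src.read(1)
    profile = src.profile

# 1 for cells inside the 300-1300 fire range, 0 for the rest
mask = ((band >= 300) & (band <= 1300)).astype(np.uint8)

profile.update(dtype=rasterio.uint8, count=1)
with rasterio.open("fire_mask.tif", "w", **profile) as dst:
    dst.write(mask, 1)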

Vectorize the result with the Polygonize process and, following the satellite imagery, select the polygons in the fire area.

Use the Dissolve tool to merge all the polygons into one element; then Smooth to round the corners slightly.

Let's now get the inverse. Extract by extent using the Landsat layer, then use Difference with the fire polygon.

Process terrain

1. Combine terrain data

Join the files of each model type into one (one for the DSM and another for the DTM) using the Build virtual raster process.

2. Extract terrain data

Extract the data of interest in each model:

  • From the DSM, cut out the area affected by the fire, so all the surface (trees and structures) inside it is removed.
  • Do the opposite with the DTM: keep only the terrain (without trees) in the fire area, so it can fill the gap left in the other model.

Use the Clip raster by mask layer process with the mask layers generated previously.

Finally join both raster layers, so they fill each others' gaps, using Build virtual raster.

Bring it to life with Cesium JS

You should already have a surface model without trees in the fire area, but let's try to explore it in an interactive way.

I showed a similar example, using a custom Digital Terrain Model, as well as a recent satellite image, for the Tajogaite volcano in La Palma:

https://theroamingworkshop.cloud/b/1319/cesiumjs-el-visor-gratuito-de-mapas-en-3d-para-tu-web/

In this case I'll use Cesium JS again to interact easily with the map (follow the post linked above to see how to upload your custom files to the Cesium JS viewer).

For this purpose, I created a split-screen viewer (using two separate instances of Cesium JS) to show the before and after picture of the fire. Here you have a preview:

https://theroamingworkshop.cloud/demos/cesiumJSmirror/

I hope you liked it! Here you have the full code and a link to GitHub, where you can download it directly. And remember, share your doubts or comments on Twitter!

🐦 @RoamingWorkshop

<html lang="en">
<head>
<meta charset="utf-8">
<title>Cesium JS mirror v1.0</title>
<script src="https://cesium.com/downloads/cesiumjs/releases/1.96/Build/Cesium/Cesium.js"></script>
<link href="https://cesium.com/downloads/cesiumjs/releases/1.96/Build/Cesium/Widgets/widgets.css" rel="stylesheet">
</head>
<body style="margin:0;width:100vw;height:100vh;display:flex;flex-direction:row;font-family:Arial">
<div style="height:100%;width:50%;" id="cesiumContainer1">
	<span style="display:block;position:absolute;z-Index:1001;top:0;background-color: rgba(0, 0, 0, 0.5);color:darkorange;padding:13px">06/04/2023</span>
</div>
<div style="height:100%;width:50%;background-color:black;" id="cesiumContainer2">
	<span style="display:block;position:absolute;z-Index:1001;top:0;background-color: rgba(0, 0, 0, 0.5);color:darkorange;padding:13px">17/02/2023</span>
	<span style="display:block;position:absolute;z-Index:1001;bottom:10%;right:0;background-color: rgba(0, 0, 0, 0.5);color:white;padding:13px;font-size:14px;user-select:none;">
		<b><u>Cesium JS mirror v1.0</u></b><br>
		· Use the <b>left panel</b> to control the camera<br>
		· <b>Click+drag</b> to move the position<br>
		· <b>Control+drag</b> to rotate camera<br>
		· <b>Scroll</b> to zoom in/out<br>
		<span><a style="color:darkorange" href="https://theroamingworkshop.cloud" target="_blank">© The Roaming Workshop <span id="y"></span></a></span>
    </span>
</div>
<script>
	// INSERT ACCESS TOKEN
    // Your access token can be found at: https://cesium.com/ion/tokens.
    // Replace `your_access_token` with your Cesium ion access token.

    Cesium.Ion.defaultAccessToken = 'your_access_token';

	// Invoke LEFT view
    // Initialize the Cesium Viewer in the HTML element with the `cesiumContainer` ID.

    const viewerL = new Cesium.Viewer('cesiumContainer1', {
		terrainProvider: new Cesium.CesiumTerrainProvider({
			url: Cesium.IonResource.fromAssetId(1640615),//get your asset ID from "My Assets" menu
		}),	  
		baseLayerPicker: false,
		infoBox: false,
    });    

	// Add Landsat imagery
	const layerL = viewerL.imageryLayers.addImageryProvider(
	  new Cesium.IonImageryProvider({ assetId: 1640455 })//get your asset ID from "My Assets" menu
	);
	
	// Hide bottom widgets
	viewerL.timeline.container.style.visibility = "hidden";
	viewerL.animation.container.style.visibility = "hidden";

    // Fly the camera at the given longitude, latitude, and height.
    viewerL.camera.flyTo({
      destination : Cesium.Cartesian3.fromDegrees(-6.7200, 43.175, 6000),
      orientation : {
        heading : Cesium.Math.toRadians(15.0),
        pitch : Cesium.Math.toRadians(-20.0),
      }
    });
    
    // Invoke RIGHT view

    const viewerR = new Cesium.Viewer('cesiumContainer2', {
		terrainProvider: new Cesium.CesiumTerrainProvider({
			url: Cesium.IonResource.fromAssetId(1640502),//get your asset ID from "My Assets" menu
		}),	  
		baseLayerPicker: false,
		infoBox: false,
    });    

	// Add Landsat imagery
	const layerR = viewerR.imageryLayers.addImageryProvider(
	  new Cesium.IonImageryProvider({ assetId: 1640977 })
	);
	
	// Hide bottom widgets
	viewerR.timeline.container.style.visibility = "hidden";
	viewerR.animation.container.style.visibility = "hidden";

    // Fly the camera at the given longitude, latitude, and height.
    viewerR.camera.flyTo({
      destination : Cesium.Cartesian3.fromDegrees(-6.7200, 43.175, 6000),
      orientation : {
        heading : Cesium.Math.toRadians(15.0),
        pitch : Cesium.Math.toRadians(-20.0),
      }
    });
    
    // Invoke camera tracker
    //define a loop
    var camInterval=setInterval(function(){

	},200);
    clearInterval(camInterval);
    
    document.onmousedown=trackCam;
    document.ondragstart=trackCam;
    
    //define loop function (read properties from left camera and copy to right camera)
    function trackCam(){
		camInterval=setInterval(function(){
			viewerR.camera.setView({
				destination: Cesium.Cartesian3.fromElements(
					  viewerL.camera.position.x,
					  viewerL.camera.position.y,
					  viewerL.camera.position.z
					),
				orientation: {
					direction : new Cesium.Cartesian3(
						viewerL.camera.direction.x,
						viewerL.camera.direction.y,
						viewerL.camera.direction.z),
					up : new Cesium.Cartesian3(
						viewerL.camera.up.x,
						viewerL.camera.up.y,
						viewerL.camera.up.z)
				},
			});
		},50);
	};
	//stop loop listeners (release mouse or stop scroll)
	document.onmouseup=function(){
		clearInterval(camInterval);
	};
	document.ondragend=function(){
		clearInterval(camInterval);
	};
	
	//keep the copyright date updated
	var y=new Date(Date.now());
	document.getElementById("y").innerHTML=y.getFullYear();
  </script>
</body>
</html>

Three.js: a 3D model viewer for your site

I was writing a post where I wanted to insert a 3D model to picture it better, and I even thought of building a viewer myself. But I didn't have to browse long before finding Three.js.

https://threejs.org/

It’s all invented!

For this example, I'll use the official CDN libraries, instead of downloading the files to a server.

Create a basic .html file as suggested in the documentation:

https://threejs.org/docs/index.html#manual/en/introduction/Creating-a-scene

<!DOCTYPE html>
<html>
	<head>
		<meta charset="utf-8">
		<title>My first three.js app</title>
		<style>
			body { margin: 0; }
		</style>
	</head>
	<body>
        <script async src="https://unpkg.com/[email protected]/dist/es-module-shims.js"></script>

        <script type="importmap">
          {
            "imports": {
              "three": "https://unpkg.com/[email protected]/build/three.module.js"
            }
          }
        </script>

        <script>
        //App code goes here
        </script>
	</body>
</html>

Create a scene

Let's continue with the example and fill in the second <script> block, defining a scene with an animated rotating cube:

<script type="module">
        import * as THREE from 'three';
	const scene = new THREE.Scene();
	const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );

	const renderer = new THREE.WebGLRenderer();
	renderer.setSize( window.innerWidth, window.innerHeight );
	document.body.appendChild( renderer.domElement );

	const geometry = new THREE.BoxGeometry( 1, 1, 1 );
	const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
	const cube = new THREE.Mesh( geometry, material );
	scene.add( cube );

	camera.position.z = 5;

	function animate() {
		requestAnimationFrame( animate );

		cube.rotation.x += 0.01;
		cube.rotation.y += 0.01;

		renderer.render( scene, camera );
	};

	animate();
</script>

All of this would look like this:

Add drag controls and background

Now we have a base to work with. We can add some functionality by inserting the OrbitControls module.

//Import new modules at the beginning of the script
import { OrbitControls } from 'https://unpkg.com/[email protected]/examples/jsm/controls/OrbitControls.js';

//then add the mouse controls after declaring the camera and renderer
const controls = new OrbitControls( camera, renderer.domElement );

Also, you can modify the background easily, but you will need to host your images within your app in a server, or run it locally, because of CORS. I will be using the background image of the blog header, which was taken from Stellarium.

First define a texture. Then, add it to the scene:

//do this before rendering, while defining the scene
//define texture
const texture = new THREE.TextureLoader().load( "https://theroamingworkshop.cloud/demos/Unity1-north.png" );

//add texture to scene
scene.background=texture;

Full code:

<!DOCTYPE html>
<html>
	<head>
		<meta charset="utf-8">
		<title>My first three.js app</title>
		<style>
			body { margin: 0; }
		</style>
	</head>
	<body style="margin: 0; width:100%;height:300px;">
        <script async src="https://unpkg.com/[email protected]/dist/es-module-shims.js"></script>

        <script type="importmap">
          {
            "imports": {
              "three": "https://unpkg.com/[email protected]/build/three.module.js"
            }
          }
        </script>

    <script type="module">

    import * as THREE from 'three';
	import { OrbitControls } from 'https://unpkg.com/[email protected]/examples/jsm/controls/OrbitControls.js';

	const scene = new THREE.Scene();
    
	const texture = new THREE.TextureLoader().load( "https://theroamingworkshop.cloud/demos/Unity1-north.png" );
	scene.background=texture;

    const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );

	const renderer = new THREE.WebGLRenderer();
	renderer.setSize( window.innerWidth, window.innerHeight );
	document.body.appendChild( renderer.domElement );
    
    const controls = new OrbitControls( camera, renderer.domElement );

	const geometry = new THREE.BoxGeometry( 1, 1, 1 );
	const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
	const cube = new THREE.Mesh( geometry, material );
	scene.add( cube );

	camera.position.z = 5;

	function animate() {
		requestAnimationFrame( animate );

		cube.rotation.x += 0.01;
		cube.rotation.y += 0.01;

		renderer.render( scene, camera );
	};

	animate();
</script>
</body>
</html>

Insert a 3D model

Now let's replace the cube with our own 3D model which, in the case of Three.js, will preferably be in glTF format (.GLB or .GLTF), as it's the best supported and renders fastest (.fbx, .stl, .obj and so on are also supported).

I will export a .glb of this basic Raspberry Pi 4B case that I did some time ago using Blender:

Now, replace the <script> block with one based on the "webgl_loader_gltf" example, which was shown at the start of the post:

<script type="module">
import * as THREE from 'three';
import { GLTFLoader } from 'https://unpkg.com/[email protected]/examples/jsm/loaders/GLTFLoader.js';
import { OrbitControls } from 'https://unpkg.com/[email protected]/examples/jsm/controls/OrbitControls.js';

let camera, scene, renderer;

init();
render();

function init() {

	const container = document.createElement( 'div' );
	document.body.appendChild( container );

	camera = new THREE.PerspectiveCamera( 30, window.innerWidth / window.innerHeight, 0.1, 20 );
    camera.position.set( 0.2, 0.2, 0.2 );

	scene = new THREE.Scene();        
    scene.add( new THREE.AmbientLight( 0xffffff, 0.75 ) );

	const dirLight = new THREE.DirectionalLight( 0xffffff, 1 );
	dirLight.position.set( 5, 10, 7.5 );
	dirLight.castShadow = true;
	dirLight.shadow.camera.right = 2;
	dirLight.shadow.camera.left = - 2;
	dirLight.shadow.camera.top	= 2;
	dirLight.shadow.camera.bottom = - 2;
	dirLight.shadow.mapSize.width = 1024;
	dirLight.shadow.mapSize.height = 1024;
	scene.add( dirLight );

    //model
     const loader = new GLTFLoader();
	 loader.load( 'https://theroamingworkshop.cloud/threeJS/models/rPi4case/rPi4_case_v1.glb', function ( gltf ) {
		scene.add( gltf.scene );
		render();
	 } );

	renderer = new THREE.WebGLRenderer( { antialias: true } );
            
	renderer.setPixelRatio( window.devicePixelRatio );
	renderer.setSize( window.innerWidth, window.innerHeight );
	renderer.toneMapping = THREE.ACESFilmicToneMapping;
	renderer.toneMappingExposure = 1;
	renderer.outputEncoding = THREE.sRGBEncoding;
	container.appendChild( renderer.domElement );

	const controls = new OrbitControls( camera, renderer.domElement );
	controls.addEventListener( 'change', render );
    controls.minDistance = 0.001;
	controls.maxDistance = 1;
	controls.target.set( 0.03, 0.01, -0.01 );
	controls.update();
	window.addEventListener( 'resize', onWindowResize );
}
function onWindowResize() {
	camera.aspect = window.innerWidth / window.innerHeight;
	camera.updateProjectionMatrix();
	renderer.setSize( window.innerWidth, window.innerHeight );
	render();
}
function render() {
	renderer.render( scene, camera );
}
</script>

Basically it does the following:

  • Import used modules:
    • GLTFLoader will load our model in .glb format
    • OrbitControls for camera rotation and position
  • Define the scene:
    • define a camera
    • define the light (in this case, ambient and directional; try commenting each of them to see the difference)
    • define a background
  • Load the model in the scene
  • Define the rendering parameters and render.

All of it finally looks like this (click and drag!):

Hope it's useful! Doubts or comments on 🐦 Twitter!

🐦 @RoamingWorkshop

Get average sea temperature (or any other raster map data)

UPDATE 2023! Hourly WMS service removed and daily version updated to V2. WMS version updated to 1.3.0

Recently it's been trendy to talk about the increased warming of the sea, and I've found very few maps where you can actually click and check the temperature it's supposed to indicate.

So I’ve done my own:

  1. I’m going to find a WMS with the required raster information,
  2. Replicate the legend using html <canvas> elements,
  3. And obtain the temperature value matching the pixel color value.

The data

There are several, and somewhat confusing, information sources:

  • Puertos del Estado:
    My first attempt was to check the Spanish State Ports website, but the only good thing I found was that they use Leaflet. They have a decent real-time viewer, but it's really complicated to access the data to do anything with it (although I'll show you how in the future).
  • Copernicus:
    "Copernicus is the European Union's Earth observation programme", as they indicate in their website, and they gather the majority of geographical and environmental information produced in the EU member countries.
    In their marine section we'll find Sea Surface Temperature (SST) data obtained from satellite imagery and other instruments from different European organisations.

Global Ocean OSTIA Diurnal Skin Sea Surface Temperature

After trying them all, the most complete is the global map produced by the MetOffice.

As they indicate, it shows the hourly average in a gap-free map that uses in-situ, satellite and infrared radiometry.

The basemap

I'll create a basemap as I've shown in other posts.

As I'm adding a raster WMS, consisting of low resolution pixels, I'll add a layer with world countries from Eurostat.

I'll download it in geoJSON format and convert it into a variable called "countries" in a new file, CNTR_RG_01M_2020_4326.js. I need to insert the following heading so it's read correctly (and close the JSON variable with "]}" at the end so it parses correctly).

var countries = {
"type": "FeatureCollection",
"crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:OGC:1.3:CRS84" } },
"features": [
{"type": "FeatureCollection", "features": [{"type": "Feature", "geometry": {"type": "MultiPolygon", "coordinates": [[[[51.590556, 24.242975],   ......   ]}

Once the file is created, we can link it in our .html <head>:

<script src="./CNTR_RG_01M_2020_4326.js"></script>

TIP! If you have issues creating this file, you can download it from this server or add the full link to the head script

The basemap looks like this:

[Embedded demo: SST raster data]
<!DOCTYPE html>
<html>
<head>

<title>SST raster data</title>

<link rel="icon" href="data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2290%22>🌡</text></svg>">

<link rel="stylesheet" href="https://unpkg.com/[email protected]/dist/leaflet.css"
   integrity="sha512-hoalWLoI8r4UszCkZ5kL8vayOGVae1oxXe/2A4AO6J9+580uKHDO3JdHb7NzwwzK5xr/Fs0W40kiNHxM9vyTtQ=="
   crossorigin=""/>
   
<script src="https://unpkg.com/[email protected]/dist/leaflet.js"
   integrity="sha512-BB3hKbKWOc9Ez/TAwyWxNXeoV9c1v6FIeYiBieIWkpLjauysF18NzgR1MBNBXf8/KABdlkX68nAhlwcDFLGPCQ=="
   crossorigin=""></script>

<script src="./CNTR_RG_01M_2020_4326.js"></script>

</head>

<style>
body{
	margin:0;
}
#base1{
	width:100vw;
	height:100vh;
}
</style>

<body>
	<div id="base1"></div>
</body>
<script>

	//Create the Leaflet map
	var map = L.map('base1').setView([48, 10], 5);
	
	//Add the countries layer
	L.geoJSON(countries, {	//uses the "countries" variable defined in 'CNTR_RG_01M_2020_4326.js'
		style: function(){	//override Leaflet's default style with something nicer
			return {
			fillColor: "BurlyWood",
			color: "bisque",
			fillOpacity: 1,
			};
		}
	}).addTo(map);

</script>

</html>

Query and add the WMS

I'll add the global daily map by MetOffice given by the Copernicus project.

To query the details of the WMS, we must find the metadata file, which in this case is in the "DATA-ACCESS" tab.

In this file we make a search for "WMS" to find the link to the service.

And it's the following:

https://nrt.cmems-du.eu/thredds/wms/METOFFICE-GLO-SST-L4-NRT-OBS-SST-V2

We'll append the standard WMS request "?service=WMS&request=GetCapabilities", which will show the information available in the WMS, like layers, legends, styles or dimensions.

https://nrt.cmems-du.eu/thredds/wms/METOFFICE-GLO-SST-L4-NRT-OBS-SST-V2?service=WMS&request=GetCapabilities

Let's add the WMS layer as stated in Leaflet docs:

	var base = L.tileLayer.wms(URL, {
		layers: 'analysed_sst', //change the layer. Check with ?service=WMS&request=GetCapabilities
		format: 'image/png',
		transparent: true,
        styles: 'boxfill/rainbow',
        //time: '2022-08-02T16:30:00.000Z',
		attribution: "© <a target='_blank' href='https://resources.marine.copernicus.eu/product-detail/SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001/INFORMATION'>Copernicus Marine Service & MetOffice ◳</a>"
	}).addTo(map);

Find the option values that you need in the capabilities file, specifically in these tags (a small script to list them is sketched after this list):

  • <Layer queryable="1"><Name> will show the layer names that we can introduce in the property "layers:".
  • <Style><Name> will show the names of the different styles available to represent the raster map; we'll introduce it in the property "styles:".
  • <LegendURL> shows a link to the legend used for that style.
  • <Dimension> will show the units that we can use when querying the WMS. In this case it's a time dimension, as we can vary the date that the map represents. I'll leave it commented out for the moment so it shows the last available date.
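If you'd rather not dig through the XML by hand, a quick python sketch like this can list those tags from the GetCapabilities response (it only assumes the requests package and the standard WMS 1.3.0 namespace):

import requests
import xml.etree.ElementTree as ET

URL = "https://nrt.cmems-du.eu/thredds/wms/METOFFICE-GLO-SST-L4-NRT-OBS-SST-V2"
r = requests.get(URL, params={"service": "WMS", "request": "GetCapabilities"}, timeout=30)
root = ET.fromstring(r.content)
ns = {"wms": "http://www.opengis.net/wms"}  # WMS 1.3.0 namespace

for layer in root.iter("{http://www.opengis.net/wms}Layer"):
    name = layer.find("wms:Name", ns)
    if name is not None:
        print("Layer:", name.text)
    for style in layer.findall("wms:Style/wms:Name", ns):
        print("  Style:", style.text)
    for dim in layer.findall("wms:Dimension", ns):
        print("  Dimension:", dim.get("name"), "default:", dim.get("default"))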
[Embedded demo: SST raster data]

Finally, let's add the legend to the map to have a visual reference.

It's inserted as an image in the <body> and some CSS in <style>:

<style>
body{
	margin:0;
}
#base{
	width:100%;
	height:350px;
}
.leyenda{
	position:absolute;
	width:110px;
	height:264px;
	top:5px;
	right:5px;
	z-Index:1001;
	display:flex;
	justify-content:flex-end;
}
</style>

<body>
    <div class="leyenda">
		<img src="https://nrt.cmems-du.eu/thredds/wms/METOFFICE-GLO-SST-L4-NRT-OBS-SST-V2?REQUEST=GetLegendGraphic&LAYER=analysed_sst&PALETTE=rainbow&transparent=true"></img>
	</div>

	<div id="base"></div>
</body>
[Embedded demo: SST raster data]

As you can see, it's hard to try and figure out a value for any of the pixels in the map.

Mimic the legend to query the data

To know the temperature that a pixel represents, we need the position of that pixel's colour in the legend and its proportional value between the extreme values (310 K = 36.85 ºC and 270 K = -3.15 ºC). For example, a colour found halfway down the legend corresponds to roughly 36.85 - 0.5·40 = 16.85 ºC.

The problem is that CORS policy won't let you query the image as it's outside our domain. On the other hand, if we add the image to our domain, it will have a low resolution and lose precision in the colors that it shows.

That's why I'll mimic the legend using a <canvas> element.

    <div class="leyenda">
		<canvas id="gradientC"></canvas>
		<img src="https://nrt.cmems-du.eu/thredds/wms/METOFFICE-GLO-SST-L4-NRT-OBS-SST-V2?REQUEST=GetLegendGraphic&LAYER=analysed_sst&PALETTE=rainbow&transparent=true"></img>
	</div>

Using javascript, and the HTML Color Picker, we'll define the different color stops in a "gradient" type fill that will replace the original legend.

	function grad(){
		//Generate the legend in the 'gradientC' canvas
		var ctx=document.getElementById("gradientC").getContext('2d');
		//Define a linear gradient
		var grd=ctx.createLinearGradient(0,150,0,0);
		//Calibrate the gradient stops by sampling the original legend image
		//Uncomment the image next to the canvas in the html
		//Use the HTML color picker: https://www.w3schools.com/colors/colors_picker.asp
		grd.addColorStop(0, "rgb(0, 0, 146)");			//0 -> -3.15
		grd.addColorStop(0.09, "rgb(0, 0, 247)");		//1
		grd.addColorStop(0.185, "rgb(1, 61, 255)");		//2
		grd.addColorStop(0.26, "rgb(0, 146, 254)");		//3
		grd.addColorStop(0.3075, "rgb(0, 183, 255)");		//4
		grd.addColorStop(0.375, "rgb(3, 251, 252)");	//5
		grd.addColorStop(0.5, "rgb(111, 255, 144)");	//6 -> 20.0
		grd.addColorStop(0.575, "rgb(191, 255, 62)");	//7
		grd.addColorStop(0.64, "rgb(255, 255, 30)");	//8
		grd.addColorStop(0.74, "rgb(255, 162, 1)");		//9
		grd.addColorStop(0.805, "rgb(255, 83, 0)");		//10
		grd.addColorStop(0.90, "rgb(252, 4, 1)");		//11
		grd.addColorStop(1, "rgb(144, 0, 0)");			//12 -> 36.85
		
		//add the gradient to the canvas
		ctx.fillStyle = grd;
		ctx.fillRect(0,0,255,255);
	}
	
	//run the gradient function on start-up
	grad();
[Embedded demo: SST raster data]

Pixel selector creation and temperature calculator

Once we have our own legend, we create another element that shows the selected pixel as well as the temperature obtained.

To use the pixel color, we'll add another <canvas>:

	<canvas id="temp">
		<img id="pixel" src="" ALT="CLICK PARA OBTENER TEMPERATURA"></img>
	</canvas>
	
	<div id="tempTxt">PINCHA SOBRE EL MAR PARA CONOCER SU TEMPERATURA</div>

And format the new elements in <style>:

#temp{
	position:absolute;
	bottom:10px;
	left:10px;
	width:150px;
	height:150px;
	background-color: lightgrey;
	border: 2px solid white;
	border-radius: 5px;
	z-Index: 1001;
}
#tempTxt{
	display:flex;
	position:absolute;
	bottom:10px;
	left:10px;
	width:150px;
	height:150px;
	padding: 2px 2px 2px 2px;
	z-Index:1001;
	font-family:verdana;
	font-size:16px;
	font-weight:bold;
	text-align:center;
	align-items:center;
    word-wrap:break-word;
    word-break:break-word;
}
[Embedded demo: SST raster data]

Now we add two functions:

  • onMapClick() obtains the selected pixel and passes it to the lower box. We use a GET request to the WMS service with the coordinates of the clicked location. It's important to take the reference system into account (here EPSG:3857, rather than the usual geographic coordinates) for the unit conversions.
	//Add a click handler to the map
	map.addEventListener('click', onMapClick);
	
	function onMapClick(e) {
		//run the grad function on every click
		grad();
		//Get the coordinates of the selected point
        var latlngStr = '(' + e.latlng.lat.toFixed(3) + ', ' + e.latlng.lng.toFixed(3) + ')';
		//console.log(latlngStr);

		//Define the CRS used to send the query to the WMS
		const proj = L.CRS.EPSG3857;
        //const proj = L.CRS.EPSG4326;
		
		//Define the map bounds requested from the WMS so they cover roughly 1 pixel
		var BBOX=((proj.project(e.latlng).x)-10)+","+((proj.project(e.latlng).y)-10)+","+((proj.project(e.latlng).x)+10)+","+((proj.project(e.latlng).y)+10);
		
		//console.log(BBOX);
		
		//Reset the image on every click
		var tTxt=document.getElementById("tempTxt");
		var pix= document.getElementById("pixel");
		var ctx=document.getElementById("temp").getContext("2d");
		//pix.src="";
		ctx.fillStyle="lightgrey";
		ctx.fillRect(0,0,300,300);
		
		//Request the selected pixel
		var xPix= new XMLHttpRequest();
		xPix.onreadystatechange = function(){
			if (this.readyState == 4 && this.status == 200) {
				pix.src=URL+WMS+BBOX;
				pix.onload=function(){
					ctx.drawImage(pix,0,0,300,300);
					tTxt.innerHTML="INTERPRETANDO LEYENDA...";
					//Interpret the pixel against the legend
					leyenda();
				}
				pix.crossOrigin="anonymous";
			}
		};
		
		xPix.open("GET", URL+WMS+BBOX);
		xPix.send();
		
		tTxt.innerHTML="CARGANDO TEMPERATURA...";
		



    }
  • leyenda() calculates the temperature of the selected pixel according to our legend. It shows the value in the lower box and also adds a white indication mark on the legend.
    The calculation algorithm consists of running through the legend pixel by pixel (vertically) and comparing the difference between the rgb values of the legend and the rgb values of the selected pixel. We keep the minimum difference until we reach the last pixel, so there will be cases where the solution is not 100% exact. It's not the best method, but it's fast (to understand and to execute) and quite effective.
	function leyenda(){
		var ctx=document.getElementById("temp").getContext("2d");
		var tTxt=document.getElementById("tempTxt");
		//get the RGB value of the selected pixel
		var RGB=ctx.getImageData(5,5,1,-1).data;
		//console.log(ctx.getImageData(10,10,1,-1).data);
		
		var key=document.getElementById("gradientC").getContext("2d");
		var max=150;
		
		var min=1000;//the maximum possible difference is 255x3=765
		var dif="";
		var val="";
		
		//walk the legend gradient pixel by pixel to obtain the temperature value
		for(var p=1;p<=max;p++){
			//get the current value
			var temp=key.getImageData(1,p,1,-1).data;
			//console.log(temp);
			//compare it against the selected pixel and get the total difference
			dif=Math.abs(parseInt(temp[0])-parseInt(RGB[0]))+Math.abs(parseInt(temp[1])-parseInt(RGB[1]))+Math.abs(parseInt(temp[2])-parseInt(RGB[2]));
			if(dif<min){
				min=dif;
				val=p;
				//console.log("Obj:"+RGB[0]+","+RGB[1]+","+RGB[2]+"\nTemp:"+temp[0]+","+temp[1]+","+temp[2]+"\nDif:"+dif);
			}
		}
		var T=36.85-(val*40/max);
		T=T.toFixed(2);
		//draw a reference line on the legend
		key.fillStyle="white";
		key.fillRect(0,val,255,1);
		
		//set the temperature in the text
		//if the colour is grey, we clicked on land
		//console.log("T= "+T);
		if(RGB[0]==211&RGB[1]==211&RGB[2]==211){
			tTxt.innerHTML="PINCHA SOBRE EL MAR PARA CONOCER SU TEMPERATURA";
		}else if(typeof T == "undefined"){
			tTxt.innerHTML="¡ERROR!<BR>PRUEBA OTRO PUNTO DEL MAR.";
		}else{
			tTxt.innerHTML="TEMPERATURA APROXIMADA: <br><br>"+T+" ºC";
		}
		//console.log(key.getImageData(1,150,1,-1).data);
	}

We also omit the original legend image and add some custom text.

Finally, we add a date controller in the <body>:

    <div id="timeBlock">
        <button id="reini" onclick="reiniT()">↺</button>
	    <input type="datetime-local" id="timeInp" name="maptime">
    </div>

And its CSS:

#tempTxt{
	display:flex;
	position:absolute;
	bottom:10px;
	left:10px;
	width:150px;
	height:150px;
	padding: 2px 2px 2px 2px;
	z-Index:1001;
	font-family:verdana;
	font-size:16px;
	font-weight:bold;
	text-align:center;
	align-items:center;
    word-wrap:break-word;
    word-break:break-word;
}
#timeBlock{
    display:flex;
    flex-direction:row;
	position:absolute;
    width:250px;
    top:10px;
    left:50%;
    margin-left:-125px;
    justify-content:center;
	z-Index:1002;
    font-family:verdana;
    font-size:14px;
}
#timeInp{
	width:200px;
    height:30px;
	text-align:center;
}
#reini{
    width:35px;
    height:35px;
    margin-right:5px;
    font-weight:bold;
    font-size:18px;
    padding:0;
}

As well as a few javascript lines to:

  • get the last available date and display it,
  • update the map when there's a change
  • and define maximum and minimum allowed values
	//Get the map's latest update time
	var timeInp=document.getElementById("timeInp");
	var t;
    var maxT;
    var minT;
	var xTime = new XMLHttpRequest();
		xTime.onreadystatechange = function(){
			if (this.readyState == 4 && this.status == 200) {
				//parse the XML as per https://www.w3schools.com/xml/xml_parser.asp
                var xmlDoc=this.responseXML;
                t=xmlDoc.children[0].children[1].children[2].children[12].children[1].children[5].attributes[4].value;
                //Convert it to a date object, dropping the seconds
        		t=new Date(t);
                t=t.toISOString().substring(0,t.toISOString().length-8);
                //Pass it to the selector and set it as the maximum
                timeInp.max=t;
                timeInp.value=t;
                maxT=new Date(t);

                //also set the minimum
                t=xmlDoc.children[0].children[1].children[2].children[12].children[1].children[5].innerHTML.trim();
                t=t.substring(0,16);
                timeInp.min=t;
                minT=new Date(t);
			}
		};
		
		xTime.open("GET", URL+"?service=WMS&request=GetCapabilities");
		xTime.send();
	
	//Date selector for the WMS
		
	timeInp.addEventListener("change", function(){
		t=new Date(timeInp.value.toString());
		t.setHours(12-t.getTimezoneOffset()/60);
        t=t.toISOString().substring(0,t.toISOString().length-8);
	    timeInp.value=t;
	    t=new Date(timeInp.value);
            t.setHours(12-t.getTimezoneOffset()/60);

        //if we are within the data range..
        if(t>=minT && t<=maxT){
		    t=t.toISOString();
            //update the map
		    base.setParams({
			    time: t,
			    styles: "boxfill/rainbow",
		    },0);
        }else{//show an error
            alert("La fecha introducida está fuera de rango.");
        }
	});

    //date reset function
    function reiniT(){
        timeInp.value=maxT.toISOString().substring(0,maxT.toISOString().length-13)+"23:30";
    }

Result

Joining it all, you'll end up with something like this, which you can see fullscreen here:

If you liked this, have any doubts or any idea to improve this or other maps, drop your thought in Twitter 🐦

🐦 @RoamingWorkshop

I'll leave the full code below. See you soon!

<!DOCTYPE html>
<html>
<head>
<meta charset='UTF-8'>
<title>SST raster data</title>

<link rel="icon" href="data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2290%22>🌡</text></svg>">

<link rel="stylesheet" href="https://unpkg.com/[email protected]/dist/leaflet.css"
   integrity="sha512-hoalWLoI8r4UszCkZ5kL8vayOGVae1oxXe/2A4AO6J9+580uKHDO3JdHb7NzwwzK5xr/Fs0W40kiNHxM9vyTtQ=="
   crossorigin=""/>
   
<script src="https://unpkg.com/[email protected]/dist/leaflet.js"
   integrity="sha512-BB3hKbKWOc9Ez/TAwyWxNXeoV9c1v6FIeYiBieIWkpLjauysF18NzgR1MBNBXf8/KABdlkX68nAhlwcDFLGPCQ=="
   crossorigin=""></script>

<script src="./CNTR_RG_01M_2020_4326.js"></script>

</head>

<style>
body{
	margin:0;
}
#temp{
	position:absolute;
	bottom:10px;
	left:10px;
	width:150px;
	height:150px;
	background-color: lightgrey;
	border: 2px solid white;
	border-radius: 5px;
	z-Index: 1001;
}
#map{
	width:100vw;
	height:100vh;
}
.leyenda{
	position:absolute;
	top:10px;
	z-Index:1001;
	display:flex;
	flex-direction:row;
	width:50px;
	height:255px;
	right: 50px;
	font-family:verdana;
	font-size:11px;
}
.leyenda-txt{
	display:flex;
	flex-direction:column;
	align-items: stretch;
	justify-content:center;
	text-align:right;
	font-weight:bold;
	text-shadow: 0 0 6px white;
	margin: 0px 3px 0px 3px;
}
.leyenda-val{
	height:255px;
	margin-top:-5px;
}
#gradientC{
	z-index:1002;
	width:25px;
	height:255px;
	margin:0;
}
#tempTxt{
	display:flex;
	position:absolute;
	bottom:10px;
	left:10px;
	width:150px;
	height:150px;
	padding: 2px 2px 2px 2px;
	z-Index:1001;
	font-family:verdana;
	font-size:16px;
	font-weight:bold;
	text-align:center;
	align-items:center;
    word-wrap:break-word;
    word-break:break-word;
}
#timeBlock{
    display:flex;
    flex-direction:row;
	position:absolute;
    width:250px;
    top:10px;
    left:50%;
    margin-left:-125px;
    justify-content:center;
	z-Index:1002;
    font-family:verdana;
    font-size:14px;
}
#timeInp{
	width:200px;
    height:30px;
	text-align:center;
}
#reini{
    width:35px;
    height:35px;
    margin-right:5px;
    font-weight:bold;
    font-size:18px;
    padding:0;
}
</style>

<body>

    <div id="timeBlock">
        <button id="reini" onclick="reiniT()">↺</button>
	    <input type="datetime-local" id="timeInp" name="maptime">
    </div>

	<div class="leyenda">
		<div class="leyenda-txt">
			<div class="leyenda-val">36.85</div>
			<div class="leyenda-val">20.0</div>
			<div>-3.15</div>
			</div>
		<!--Descomentar para mostrar la imagen descargada de la leyenda y calibrar el gradiente-->
		<!--<img src="leyenda.jpg"></img>-->
		
		<canvas id="gradientC"></canvas>
			
		<div class="leyenda-txt" style="margin-top:-10px;"><div >(ºC)</div></div>
	</div>
	
	<canvas id="temp">
		<img id="pixel" src="" ALT="CLICK PARA OBTENER TEMPERATURA"></img>
	</canvas>
	
	<div id="tempTxt">PINCHA SOBRE EL MAR PARA CONOCER SU TEMPERATURA</div>
	
	<div id="map"></div>
</body>
<script>

	//Crear mapa de Leaflet
	var map = L.map('map').setView([48, 10], 5);
	
	//Añadimos capa de paises
	L.geoJSON(countries, {	//usa la variable "countries" que está definida en el archivo 'CNTR_RG_01M_2020_4326.js'
		style: function(){	//sobreescribimos el estilo por defecto de Leaflet con algo más estético
			return {
			fillColor: "BurlyWood",
			color: "bisque",
			fillOpacity: 1,
			};
		}
	}).addTo(map);
	
	//Añadimos servicio WMS siguiendo https://leafletjs.com/examples/wms/wms.html
	
	//OSTIA Global daily mean SST
	var URL="https://nrt.cmems-du.eu/thredds/wms/METOFFICE-GLO-SST-L4-NRT-OBS-SST-V2"
	var WMS = '?service=WMS&request=GetMap&version=1.3.0&layers=analysed_sst&styles=&format=image%2Fpng&transparent=true&width=200&height=200&CRS=EPSG:3857&bbox=';
	
	//OSTIA Global hourly mean diurnal skin SST
	//var URL="https://nrt.cmems-du.eu/thredds/wms/METOFFICE-GLO-SST-L4-NRT-OBS-SKIN-DIU-FV01.1"
	//var WMS = '?service=WMS&request=GetMap&version=1.1.1&layers=analysed_sst&styles=&format=image%2Fpng&transparent=true&width=200&height=200&srs=EPSG:3857&bbox=';

    //Global Ocean - SST Multi-sensor L3 Observations
    //var URL="http://nrt.cmems-du.eu/thredds/wms/IFREMER-GLOB-SST-L3-NRT-OBS_FULL_TIME_SERIE";
    //var WMS="?service=WMS&request=GetMap&version=1.1.1&layers=adjusted_sea_surface_temperature&styles=&format=image%2Fpng&transparent=true&width=200&height=200&srs=EPSG:4326&bbox=";
	
	var base = L.tileLayer.wms(URL, {
		layers: 'analysed_sst', //cambiar la capa. Comprobar con ?service=WMS&request=GetCapabilities
		format: 'image/png',
		transparent: true,
        styles: 'boxfill/rainbow',
    	version: "1.3.0",
        //time: '2022-08-02T16:00:00.000Z',
		attribution: "© <a target='_blank' href='https://resources.marine.copernicus.eu/product-detail/SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001/INFORMATION'>Copernicus Marine Service & MetOffice ◳</a>"
	}).addTo(map);
	
	//Obtener última hora de actualizacion del mapa
	var timeInp=document.getElementById("timeInp");
	var t;
    var maxT;
    var minT;
	var xTime = new XMLHttpRequest();
		xTime.onreadystatechange = function(){
			if (this.readyState == 4 && this.status == 200) {
				//convertimos el XML según https://www.w3schools.com/xml/xml_parser.asp
                var xmlDoc=this.responseXML;
                t=xmlDoc.children[0].children[1].children[2].children[12].children[1].children[5].attributes[4].value;
                //Lo convertimos en un objeto fecha quitando los segundos
        		t=new Date(t);
                t=t.toISOString().substring(0,t.toISOString().length-8);
            	console.log(t);
                //Lo pasamos al selector y lo establecemos como máximo
                timeInp.max=t;
                timeInp.value=t;
                maxT=new Date(t);

                //también establecemos el minimo
                t=xmlDoc.children[0].children[1].children[2].children[12].children[1].children[5].innerHTML.trim();
                t=t.substring(0,16);
                timeInp.min=t;
                minT=new Date(t);
			}
		};
		
		xTime.open("GET", URL+"?service=WMS&request=GetCapabilities");
		xTime.send();
	
	//Selector de fecha para WMS
		
	timeInp.addEventListener("change", function(){
		t=new Date(timeInp.value.toString());
        t.setHours(12-t.getTimezoneOffset()/60);
        t=t.toISOString().substring(0,t.toISOString().length-8);
	    timeInp.value=t;
	    t=new Date(timeInp.value);
    	t.setHours(12-t.getTimezoneOffset()/60);
        //t=t.toISOString().substring(0,t.toISOString().length-13)+"14:00";

        //si estamos en el rango de datos..
        if(t>=minT && t<=maxT){
		    t=t.toISOString();
            //actualziamos el mapa
		    base.setParams({
			    time: t,
			    styles: "boxfill/rainbow",
		    },0);
        }else{//mostramos error
            alert("La fecha introducida está fuera de rango.");
        }
	});

    //funcion de reinicio de fecha
    function reiniT(){
        timeInp.value=maxT.toISOString().substring(0,maxT.toISOString().length-13)+"12:00";
    }

    //Generar la leyenda
	
	function grad(){
		//Generamos la leyenda en el canvas 'gradientC'
		var ctx=document.getElementById("gradientC").getContext('2d');
		//Definimos un gradiente lineal
		var grd=ctx.createLinearGradient(0,150,0,0);
		//Calibrar las paradas del gradiente tomando muestras de la imagen
		//Descomentar la imagen que va junto al canvas en el html
		//Usar HTML color picker: https://www.w3schools.com/colors/colors_picker.asp
		grd.addColorStop(0, "rgb(0, 0, 146)");			//0 -> -3.15
		grd.addColorStop(0.09, "rgb(0, 0, 247)");		//1
		grd.addColorStop(0.185, "rgb(1, 61, 255)");		//2
		grd.addColorStop(0.26, "rgb(0, 146, 254)");		//3
		grd.addColorStop(0.3075, "rgb(0, 183, 255)");		//4
		grd.addColorStop(0.375, "rgb(3, 251, 252)");	//5
		grd.addColorStop(0.5, "rgb(111, 255, 144)");	//6 -> 20.0
		grd.addColorStop(0.575, "rgb(191, 255, 62)");	//7
		grd.addColorStop(0.64, "rgb(255, 255, 30)");	//8
		grd.addColorStop(0.74, "rgb(255, 162, 1)");		//9
		grd.addColorStop(0.805, "rgb(255, 83, 0)");		//10
		grd.addColorStop(0.90, "rgb(252, 4, 1)");		//11
		grd.addColorStop(1, "rgb(144, 0, 0)");			//12 -> 36.85
		
		//añadir el gradiente al canvas
		ctx.fillStyle = grd;
		ctx.fillRect(0,0,255,255);
	}
	
	//ejecutamos la funcion del gradiente al inicio
	grad();
	
	//Añadimos función al hacer click en el mapa
	map.addEventListener('click', onMapClick);
	
	function onMapClick(e) {
		//ejecutamos la función grad con cada click
		grad();
		//Obtenemos las coordenadas del pinto seleccionado
        var latlngStr = '(' + e.latlng.lat.toFixed(3) + ', ' + e.latlng.lng.toFixed(3) + ')';
		//console.log(latlngStr);

		//Definir el CRS para enviar la consulta al WMS
		const proj = L.CRS.EPSG3857;
        //const proj = L.CRS.EPSG4326;
		
		//Definimos los límites del mapa que pediremos al WMS para que sea aproximadamente de 1 pixel
		var BBOX=((proj.project(e.latlng).x)-10)+","+((proj.project(e.latlng).y)-10)+","+((proj.project(e.latlng).x)+10)+","+((proj.project(e.latlng).y)+10);
		
		//Restablecemos la imagen en cada click
		var tTxt=document.getElementById("tempTxt");
		var pix= document.getElementById("pixel");
		var ctx=document.getElementById("temp").getContext("2d");
		ctx.fillStyle="lightgrey";
		ctx.fillRect(0,0,300,300);
		
		//Realizamos la petición del pixel seleccionado
		var xPix= new XMLHttpRequest();
		xPix.onreadystatechange = function(){
			if (this.readyState == 4 && this.status == 200) {
				pix.src=URL+WMS+BBOX;
				pix.onload=function(){
					ctx.drawImage(pix,0,0,300,300);
					tTxt.innerHTML="INTERPRETANDO LEYENDA...";
					//Interpretamos el pixel según la leyenda
					leyenda();
				}
				pix.crossOrigin="anonymous";
			}
		};
		
		xPix.open("GET", URL+WMS+BBOX);
		xPix.send();
		
		tTxt.innerHTML="CARGANDO TEMPERATURA...";
		



    }
	function leyenda(){
		var ctx=document.getElementById("temp").getContext("2d");
		var tTxt=document.getElementById("tempTxt");
		//obtenemos el valor RGB del pixel seleccionado
		var RGB=ctx.getImageData(5,5,1,-1).data;
		
		var key=document.getElementById("gradientC").getContext("2d");
		var max=150;
		
		var min=1000;//la máxima diferencia sólo puede ser de 255x3=765
		var dif="";
		var val="";
		
		//recorremos el gradiente de la leyenda pixel a pixel para obtener el valor de temperatura
		for(var p=1;p<=max;p++){
			//obtenemos el valor actual
			var temp=key.getImageData(1,p,1,-1).data;
			//comparamos con el seleccionado y obtenemos la diferencia total
			dif=Math.abs(parseInt(temp[0])-parseInt(RGB[0]))+Math.abs(parseInt(temp[1])-parseInt(RGB[1]))+Math.abs(parseInt(temp[2])-parseInt(RGB[2]));
			if(dif<min){
				min=dif;
				val=p;
			}
		}
		var T=36.85-(val*40/max);
		T=T.toFixed(2);
		//pintamos una línea de referencia en la leyenda
		key.fillStyle="#ffffff";
		key.fillRect(0,val,255,1);
		
		//definimos la temperatura en el texto
		//si el color da gris, hemos pinchado en la tierra
		if(RGB[0]==211&RGB[1]==211&RGB[2]==211){
			tTxt.innerHTML="PINCHA SOBRE EL MAR PARA CONOCER SU TEMPERATURA";
		}else if(typeof T == "undefined"){
			tTxt.innerHTML="¡ERROR!<BR>PRUEBA OTRO PUNTO DEL MAR.";
		}else{
			tTxt.innerHTML="TEMPERATURA APROXIMADA: <br><br>"+T+" ºC";
		}
	
	}

</script>

</html>

All AEMET station data on one click in your phone

The Spanish Meteorological State Agency (AEMET) provides an open data API that we can use to access most of the data published in their website.

AEMET OpenData API

This way, any user can create simple apps only with the information needed and without accessing their website.
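The API follows a two-step pattern: the first request returns a short JSON envelope with a temporary "datos" URL, and that URL holds the actual data. Here's a minimal python sketch (you need a free API key from the AEMET OpenData site; the station-observations endpoint is shown as an example):

import requests

API_KEY = "YOUR_AEMET_API_KEY"
url = "https://opendata.aemet.es/opendata/api/observacion/convencional/todas"

# First request: returns a JSON envelope with the temporary "datos" URL
meta = requests.get(url, params={"api_key": API_KEY}, timeout=30).json()

# Second request: download the actual station observations
observations = requests.get(meta["datos"], timeout=30).json()
print(len(observations), "station records")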

In my case, I want to show station data intuitively in a web map that can be accessed from any internet device, such as a smartphone.

Here’s a preview to keep you reading:

Define a base map

I'll use all the potential of Leaflet JS and the open maps from the Spanish National Geographic Institute.

In this case, we only need a simple map:

  1. Create an .html document with its basic structure:
<!DOCTYPE HTML>
<html>

<head>

<style>

</style>

</head>

<body>

</body>

<script>

</script>

</html>
  2. Link the Leaflet libraries inside <head> tags:

I also add <meta> charset(1) and viewport(2) properties to (1)correctly read special characters in the data and (2) adjust the window correctly for mobile devices.

<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">

    <link rel="stylesheet" href="https://unpkg.com/[email protected]/dist/leaflet.css"
       integrity="sha512-xodZBNTC5n17Xt2atTPuE1HxjVMSvLVW9ocqUKLsCC5CXdbqCmblAshOMAS6/keqq/sMZMZ19scR4PsZChSR7A=="
       crossorigin=""/>
    <script src="https://unpkg.com/[email protected]/dist/leaflet.js"
       integrity="sha512-XQoYMqMTK8LvdxXYG3nZ448hOEQiglfqkJs1NOQV44cWnUrBc8PkAOcXy20w0vlaXaVUearIOBhiXZ5V3ynxwA=="
       crossorigin=""></script>

<style>

</style>

</head>
  3. Define the Leaflet map element inside the <body>
<body style="border:0;margin:0;">
    
    <div id="mapa" style="width:100vw; height:100vh;z-index:0"></div>

</body>

As you can see, I add border:0 and margin:0 to the <body> style, so the map fits the window without white space.

height:100vh and width:100vw (viewport height and viewport width) make the map fill the screen even when the window is resized. The z-index will be used later to stack elements in order and avoid overlaps.

  4. Now we can generate the map with JavaScript in the <script> block.

With L.map we define the map and its initial view (centered on Madrid).

With L.tileLayer.wms we add the WMS service to get an "online" basemap. You can find the URLs for each service in the metadata files that come with each product.

A WMS service usually offers several layers, so we need to pick one. We can get the list of layers using the WMS GetCapabilities request. In my case, it's the <Layer> whose <Name> is "IGNBaseTodo-gris", found here:

https://www.ign.es/wms-inspire/ign-base?service=WMS&request=GetCapabilities
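If you want to check the available layers programmatically, a quick sketch like this one (assuming the service allows cross-origin requests from your page) fetches the capabilities document and prints every layer <Name> it advertises:

// Quick sketch: list the layer names advertised by the WMS GetCapabilities document
fetch("https://www.ign.es/wms-inspire/ign-base?service=WMS&request=GetCapabilities")
    .then(r => r.text())
    .then(xml => {
        const doc = new DOMParser().parseFromString(xml, "text/xml");
        doc.querySelectorAll("Layer > Name").forEach(n => console.log(n.textContent));
    });

Otherwise, opening the GetCapabilities URL in a browser and searching the XML by hand works just as well.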

<script>
var o = L.map('mapa').setView([40.5, -3.5], 8);

var IGN = window.L.tileLayer.wms("https://www.ign.es/wms-inspire/ign-base", {
    		layers: 'IGNBaseTodo-gris',
    		format: 'image/png',
    		transparent: true,
    		attribution: "BCN IGN © 2022"
		});
IGN.addTo(o);
</script>

Putting all this together, we'll have a basic map:

App leafMET.js

Now we add the plugin that downloads and plots the AEMET data (leafMET), which you can find on my GitHub.

You can download leafMET.js and add it as a local script, or link it directly from this site. Inside the <head> tags we add the following:

<script type="application/javascript" src="https://theroamingworkshop.cloud/leafMET/leafMET.js"></script>

.html shortcut

Finally, to open the app in one click on a smartphone, copy the content of leafMET.js and paste it inside the <script> tags of the .html file.

This way you'll have a single file with everything needed to run the webapp. I added <title> and "icon" tags in the <head> to show a page title and an icon in the shortcut.

It should all look like this:

<!DOCTYPE HTML>
<html lang="en">

<head>
    <title>leafMET</title> 
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">

    <link rel="icon" href="data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2290%22>🌤</text></svg>">

    <link rel="stylesheet" href="https://unpkg.com/[email protected]/dist/leaflet.css"
       integrity="sha512-xodZBNTC5n17Xt2atTPuE1HxjVMSvLVW9ocqUKLsCC5CXdbqCmblAshOMAS6/keqq/sMZMZ19scR4PsZChSR7A=="
       crossorigin=""/>
    <script src="https://unpkg.com/[email protected]/dist/leaflet.js"
       integrity="sha512-XQoYMqMTK8LvdxXYG3nZ448hOEQiglfqkJs1NOQV44cWnUrBc8PkAOcXy20w0vlaXaVUearIOBhiXZ5V3ynxwA=="
       crossorigin=""></script>

    <script type="application/javascript" src="https://theroamingworkshop.cloud/leafMET/leafMET.js"></script>

<style>


</style>
</head>

<body style="border:0;margin:0;">
    
    <div id="mapa" style="width:100vw; height:100vh;z-index:0"></div>

</body>

<script>
//Create Leaflet map and load the IGN base layer
var o = L.map('mapa').setView([40.5, -3.5], 8);

var IGN = window.L.tileLayer.wms("https://www.ign.es/wms-inspire/ign-base", {
    		layers: 'IGNBaseTodo-gris',
    		format: 'image/png',
    		transparent: true,
    		attribution: "BCN IGN © 2022"
		});
IGN.addTo(o);


leafMET(o);
o.on('moveend', colocaEnLaVista);

var datosMET=null;
var sta=[];
var pinta="cross";
var ini=0;
var map="";

function leafMET(m){

    map=m;
    //Create button
    var btmet = window.document.createElement("BUTTON");
    btmet.id="btmet";
    btmet.title="Cargar datos meteorológicos";
    btmet.innerHTML="<b>🌤</b>";
    btmet.style.zIndex="1000";
    btmet.style.position="absolute";
    btmet.style.top="100px";
    btmet.style.left="10px";
    btmet.style.fontSize="16px";
	btmet.style.textAlign="center";
    btmet.style.width="35px";
    btmet.style.height="35px";
    btmet.style.background="Turquoise";
	btmet.style.border="0px solid black";
    btmet.style.borderRadius="5px";
    btmet.style.cursor="pointer";
    btmet.addEventListener("click", pintaMET);
    window.document.body.appendChild(btmet);
    //Create subtitle
    var mtxt = window.document.createElement("P");
    mtxt.id="mtxt";
    mtxt.innerHTML="<b>Carga</b>";
    mtxt.title="Cargar datos meteorológicos";
    mtxt.style.zIndex="1000";
    mtxt.style.position="absolute";
    mtxt.style.top="130px";
    mtxt.style.left="7px";
    mtxt.style.fontSize="10px";
	mtxt.style.textAlign="center";
    mtxt.style.width="40px";
    mtxt.style.height="15px";
    mtxt.style.background="DarkOrange";
	mtxt.style.border="0px solid black";
    mtxt.style.borderRadius="2px";
    mtxt.style.cursor="context-menu";
    mtxt.style.fontFamily="Arial";
    window.document.body.appendChild(mtxt);
    //Create key
    for (var i=1; i<=6;i++){
        var mkey = window.document.createElement("P");
        mkey.id="mkey"+i;
        mkey.innerHTML="<b>"+i+"</b>";
        mkey.style.zIndex="1000";
        mkey.style.position="absolute";
        var pos = 140+i*16;
        pos = pos+"px";
        mkey.style.top=pos;
        mkey.style.left="7px";
        mkey.style.fontSize="10px";
	    mkey.style.textAlign="center";
	    mkey.style.textIndent="0px";
        mkey.style.width="40px";
        mkey.style.height="15px";
        mkey.style.background="DarkOrange";
        mkey.style.opacity="0.75";
	    mkey.style.border="0px solid black";
        mkey.style.borderRadius="2px";
        mkey.style.cursor="context-menu";
        mkey.style.fontFamily="Arial";
        mkey.style.fontWeight="bold";
        mkey.style.color="black";
        mkey.style.display="none";
        window.document.body.appendChild(mkey);
    }
}

//Create loading message
function loader(){
    var ldmet = window.document.createElement("P");
    ldmet.id="ldmet";
    ldmet.innerHTML="CARGANDO DATOS METEOROLÓGICOS";
    ldmet.style.zIndex="1000";
    ldmet.style.position="relative";
    ldmet.style.top="-50vh";
    ldmet.style.width="250px";
	ldmet.style.margin="auto";
	ldmet.style.textAlign="center";
    ldmet.style.background="DarkOrange";
    ldmet.style.fontWeight="bold";
    ldmet.style.fontSize="11px";
    ldmet.style.padding="4px 4px 4px 4px";
    ldmet.style.fontFamily="Arial";
    window.document.body.appendChild(ldmet);
}


function getAEMET(){
    //show loading message
    loader();
    //http GET request to data
    var data = null;

    var xhr = new XMLHttpRequest();
    var xhd = new XMLHttpRequest();
    xhr.withCredentials = true;
    xhd.withCredentials = true;

    xhr.onload= function () {
        var consulta= JSON.parse(this.responseText);
        xhd.open("GET", consulta.datos);
        xhd.send();
    }
    xhd.onload= function () {
        data= JSON.parse(this.responseText);
        loadAEMET(data);
    }

    xhr.open("GET", "https://opendata.aemet.es/opendata/api/observacion/convencional/todas?api_key=eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJyb21hbmhkZXpnb3JyaW5AZ21haWwuY29tIiwianRpIjoiZmFiMTM1N2QtNTJhMC00ZWQ1LWFkNzYtNjY5YTAzNGI4YTFlIiwiaXNzIjoiQUVNRVQiLCJpYXQiOjE1NzAxMjM2MzIsInVzZXJJZCI6ImZhYjEzNTdkLTUyYTAtNGVkNS1hZDc2LTY2OWEwMzRiOGExZSIsInJvbGUiOiIifQ.-7vQF_TJLghx3g4t3GiHzlWt52LpMChqqtfNUhW07LQ");
    xhr.setRequestHeader("cache-control", "no-cache");
    xhr.send(data);
}

function loadAEMET(d){
    var obj=d.length;

    //loop to plot stations (s) as circles using station coordinate data
    for (var s=0; s < obj; s++){
        sta[s]=window.L.circle([d[s].lat, d[s].lon],{radius: 5000, weight: 0, opacity: 0.0, fillColor: "RoyalBlue", fillOpacity: 0.01});
        //add name data
        sta[s].title=d[s].ubi;
        //add data values (temp, wind speed and rain)
        sta[s].temp=d[s].ta;
        sta[s].viento=d[s].vmax;
        sta[s].lluvia=d[s].prec;

        var prop="<p><b>"+d[s].ubi+"</b></p>";
        prop=prop+"<p>"+d[s].fint+"</p>";
        prop=prop+"<p><b>Temperatura: </b>"+sta[s].temp+" ºC</p>";
        prop=prop+"<p><b>Viento: </b>"+sta[s].viento+" km/h</p>";
        prop=prop+"<p><b>Lluvia: </b>"+sta[s].lluvia+" mm</p>";
        sta[s].bindPopup(prop);
    }
    //remove loading message
    window.document.getElementById("ldmet").style.display="none";
    //update stations visible in view
    colocaEnLaVista();
}

function colocaEnLaVista(){
    //Plot only stations visible in view bounds
    var vista=o.getBounds();
    for (var s in sta){
    sta[s].setStyle({fillOpacity: 0.01,fill: 0.01});
        if(vista.contains(sta[s].getLatLng())){
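            //scale the circle radius with the height of the current view (distance in metres between the NE and SE corners)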
            var cobertura=o.distance(o.getBounds().getNorthEast(),o.getBounds().getSouthEast());
            sta[s].setRadius(cobertura/30);
            sta[s].addTo(map);
        }else if(sta[s]){
            sta[s].remove();
        }
    //edit stations display color according to data value
    switch(pinta){
    case "viento":
        if (sta[s].viento==null || sta[s].viento<0){
            sta[s].setStyle({fillOpacity: 0.0,fill: 0.0, cursor: "none"});
        }else if (sta[s].viento>=0 && sta[s].viento<5){
            sta[s].setStyle({fillColor: "green"});
        }else if (sta[s].viento>=5 && sta[s].viento<10){
            sta[s].setStyle({fillColor: "yellow"});
        }else if (sta[s].viento>=10 && sta[s].viento<20){
            sta[s].setStyle({fillColor: "orange"});
        }else if (sta[s].viento>=20 && sta[s].viento<30){
            sta[s].setStyle({fillColor: "red"});
        }else if (sta[s].viento>=30 && sta[s].viento<500){
            sta[s].setStyle({fillColor: "purple"});
        }
    break;
    case "lluvia":
        if (sta[s].lluvia==null || sta[s].lluvia<=0){
            sta[s].setStyle({fillOpacity: 0.0, fill: 0.0, cursor: "none"});
        }else if (sta[s].lluvia>0 && sta[s].lluvia<2){
            sta[s].setStyle({fillColor: "LightCyan"});
        }else if (sta[s].lluvia>=2 && sta[s].lluvia<5){
            sta[s].setStyle({fillColor: "LightSkyBlue"});
        }else if (sta[s].lluvia>=5 && sta[s].lluvia<10){
            sta[s].setStyle({fillColor: "SkyBlue"});
        }else if (sta[s].lluvia>=10 && sta[s].lluvia<20){
            sta[s].setStyle({fillColor: "DodgerBlue"});
        }else if (sta[s].lluvia>=20 && sta[s].lluvia<30){
            sta[s].setStyle({fillColor: "Blue"});
        }else if (sta[s].lluvia>=30 && sta[s].lluvia<500){
            sta[s].setStyle({fillColor: "DarkBlue"});
        }
    break;
    case "temp":
        if (sta[s].temp==null || sta[s].temp<=0){            
            sta[s].setStyle({fillOpacity: 0.0, fill: 0.0, cursor: "none"});
        }else if (sta[s].temp>0 && sta[s].temp<5){
            sta[s].setStyle({fillColor: "LightCyan"});
        }else if (sta[s].temp>=5 && sta[s].temp<10){
            sta[s].setStyle({fillColor: "DodgerBlue"});
        }else if (sta[s].temp>=10 && sta[s].temp<20){
            sta[s].setStyle({fillColor: "SpringGreen"});
        }else if (sta[s].temp>=20 && sta[s].temp<30){
            sta[s].setStyle({fillColor: "Gold"});
        }else if (sta[s].temp>=30 && sta[s].temp<100){
            sta[s].setStyle({fillColor: "DarkOrange"});
        }
    break;
    case "cross":     
            sta[s].remove();
    break;
    }
    }
}

function pintaMET(){
    //only download data at start
	if(ini==0){
    getAEMET();
    ini=1;
    }
    //empty legend
    for (var i=1; i<=6;i++){
        window.document.getElementById("mkey"+i).style.display= "none";
        window.document.getElementById("mkey"+i).style.color="black";
        }
    //generate new legend for each type of data
    //(using the current icon to define the next key)
    if (pinta=="cross"){
        pinta="temp";
        window.document.getElementById("btmet").innerHTML="🌡";
    	window.document.getElementById("btmet").title="Temperatura (ºC)";
        window.document.getElementById("mtxt").innerHTML="<b>T (ºC)</b>";
        window.document.getElementById("mtxt").title="Temperatura (ºC)";
        //set key
        mkey1.style.display="block";
        mkey1.style.background="LightCyan";
        mkey1.innerHTML="0 - 5";
        mkey2.style.display="block";
        mkey2.style.background="DodgerBlue";
        mkey2.innerHTML="5 - 10";
        mkey3.style.display="block";
        mkey3.style.background="SpringGreen";
        mkey3.innerHTML="10 - 20";
        mkey4.style.display="block";
        mkey4.style.background="Gold";
        mkey4.innerHTML="20 - 30";
        mkey5.style.display="block";
        mkey5.style.background="DarkOrange";
        mkey5.innerHTML="> 30";
    }else if(pinta=="temp"){
        pinta="viento";
        window.document.getElementById("btmet").innerHTML="🌪";
    	window.document.getElementById("btmet").title="Viento (Km/h)";
        window.document.getElementById("mtxt").innerHTML="<b>V(km/h)</b>";
        window.document.getElementById("mtxt").title="Viento (Km/h)";
        //set key
        mkey1.style.display="block";
        mkey1.style.background="green";
        mkey1.innerHTML="0 - 5";
        mkey2.style.display="block";
        mkey2.style.background="yellow";
        mkey2.innerHTML="5 - 10";
        mkey3.style.display="block";
        mkey3.style.background="orange";
        mkey3.innerHTML="10 - 20";
        mkey4.style.display="block";
        mkey4.style.background="red";
        mkey4.innerHTML="20 - 30";
        mkey4.style.color="white";
        mkey5.style.display="block";
        mkey5.style.background="purple";
        mkey5.innerHTML="> 30";
        mkey5.style.color="white";
    }else if(pinta=="lluvia"){
        pinta="cross";
        window.document.getElementById("btmet").innerHTML="❌";
    	window.document.getElementById("btmet").title="Sin clima";
        window.document.getElementById("mtxt").innerHTML="<b>-</b>";
        window.document.getElementById("mtxt").title="Sin clima";
    }else if(pinta=="viento"){
        pinta="lluvia";
        window.document.getElementById("btmet").innerHTML="☔";
    	window.document.getElementById("btmet").title="Precipitación (l/m²)";
        window.document.getElementById("mtxt").innerHTML="<b>P (mm)</b>";
        window.document.getElementById("mtxt").title="Precipitación (l/m²)";
        //set key
        mkey1.style.display="block";
        mkey1.style.background="LightCyan";
        mkey1.innerHTML="0 - 2";
        mkey2.style.display="block";
        mkey2.style.background="LightSkyBlue";
        mkey2.innerHTML="2 - 5";
        mkey3.style.display="block";
        mkey3.style.background="SkyBlue";
        mkey3.innerHTML="5 - 10";
        mkey4.style.display="block";
        mkey4.style.background="DodgerBlue";
        mkey4.innerHTML="10 - 20";
        mkey5.style.display="block";
        mkey5.style.background="Blue";
        mkey5.innerHTML="20 - 30";
        mkey5.style.color="white";
        mkey6.style.display="block";
        mkey6.style.background="DarkBlue";
        mkey6.innerHTML="> 30";
        mkey6.style.color="white";
    }
    //update stations visible in view
    colocaEnLaVista();
}


</script>

</html>

Also, you can directly download leafMET.html from my GitHub and open it in a browser on your phone.

Was this post useful? Any questions or comments? Come by Twitter and let me know! See you next time!

🐦 @RoamingWorkshop