Using WebGL and GLSL shaders, online image processing in a web browser can be as fast as in an offline application. This is because the GLSL code runs directly on the graphics hardware and benefits from the parallel computing power of hundreds (or thousands) of GPU shader cores.

The ImageShader plugin node executes user-provided GLSL code, thereby allowing the creation of customized compute nodes for node-based image processing and compositing within the GSN Composer.

To this end, an online GLSL editor and validator is provided that is similar to other web-based GLSL tools, such as ShaderToy, GLSL Sandbox, The Book of Shaders Editor, Kick.js Shader Editor, Shdr, ShaderFrog, the Firefox WebGL Shader Editor, etc.

The main difference is that the GSN Composer is a node-based visual programming environment, which makes it very simple and convenient to provide the inputs (i.e., the uniform variables) for the custom image shaders. For every uniform variable that is created within the custom GLSL shader code, an input slot is added to the ImageShader node, which can be connected to other nodes of the dataflow graph. This makes shader development fast and intuitive and frees the developer from writing many lines of support code to fill the uniform variables with values.

When the ImageShader node is selected in the graph area, the Edit Code button in the Nodes panel can be clicked. A dialog appears in which GLSL code can be entered.


The shader code runs once for each pixel of the output image. The Red-Green-Blue-Alpha (RGBA) color value of each pixel is determined by the value assigned to the built-in variable gl_FragColor, where 0.0 corresponds to black and 1.0 to full intensity.

To get started with GLSL programming, let's have a look at a very simple image shader that sets all pixels to red:

void main() {
  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);  // red=1.0, green=0.0, blue=0.0, alpha=1.0
}

Using texture coordinates

An important variable that tells the image shader for which pixel location it is executed is the 2-component vector tc, which contains the texture coordinates of the currently processed pixel in the range [0.0, 1.0]. In the example below, the x- and y-texture-coordinates are used to set the intensity of the red and green channel, respectively.

precision mediump float;
varying vec2 tc;  // texture coordinate of the output image in range [0.0, 1.0]

void main() {
  gl_FragColor = vec4(tc.x, tc.y, 0.0, 1.0);
}

Shader output

The lower left corner of the image represents the origin of the texture coordinate system with coordinates tc = (0.0, 0.0). The top right corner of the image is at tc = (1.0, 1.0).

Converting texture coordinates to pixel coordinates

Texture coordinates can be converted to (and from) pixel coordinates using the two functions shown below. To compute the size of one pixel in texture coordinates, 1.0 must be divided by the number of pixels in that dimension. The additional offset of 0.5 pixels is required because the center of the first pixel (with index 0) is located half a pixel's size from the border of the image (with texture coordinate 0.0):
// pixel to textureCoord
float p2t(in float p, in int noOfPixels) {
  return (p + 0.5) / float(noOfPixels);
}

// textureCoord to pixel
float t2p(in float t, in int noOfPixels) {
  return t * float(noOfPixels) - 0.5;
}
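As a quick sanity check of the 0.5-pixel offset, the two conversion functions can be mirrored in plain Python (a sketch; the function names simply match the GLSL helpers above):

```python
def p2t(p, no_of_pixels):
    # pixel index to texture coordinate: the center of pixel 0
    # lies half a pixel's size from the image border
    return (p + 0.5) / no_of_pixels

def t2p(t, no_of_pixels):
    # texture coordinate back to a (fractional) pixel index
    return t * no_of_pixels - 0.5

# the two functions are inverses of each other
print(t2p(p2t(3, 8), 8))  # 3.0
print(p2t(0, 8))          # 0.0625 (center of the first pixel in an 8-pixel image)
```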

In order to use these functions, we need to know the width and height of the output image. These values are passed to the image shader via the width and height uniform variables and the corresponding node input slots. In the example below, only the single pixel at index (3, 2) is colored black.

precision mediump float;
varying vec2 tc; // texture coordinate of the output image in range [0.0, 1.0]
uniform int width;
uniform int height;

// textureCoord to pixel
float t2p(in float t, in int noOfPixels) {
  return t * float(noOfPixels) - 0.5;
}

// round to nearest integer
int round(in float val) {
  return int(val + 0.5);
}

void main() {
  float px = t2p(tc.x, width);
  float py = t2p(tc.y, height);
  if(round(px) == 3 && round(py) == 2) { // if pixel index is (3,2)
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // black
  } else {
    gl_FragColor = vec4(tc.x, tc.y, 0.0, 1.0);
  }
}
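The per-pixel execution model can be sketched on the CPU: the Python loop below (a sketch, assuming a hypothetical 8×8 output image) applies the same conversion and rounding to every pixel and confirms that exactly one pixel is selected.

```python
def p2t(p, n):
    # pixel index to texture coordinate (mirrors the GLSL p2t)
    return (p + 0.5) / n

def t2p(t, n):
    # texture coordinate to pixel index (mirrors the GLSL t2p)
    return t * n - 0.5

def round_glsl(val):
    # same rounding as the shader helper: int(val + 0.5)
    return int(val + 0.5)

width, height = 8, 8  # hypothetical output size
black_pixels = []
for y in range(height):
    for x in range(width):
        px = t2p(p2t(x, width), width)    # pixel coordinate seen by the shader
        py = t2p(p2t(y, height), height)
        if round_glsl(px) == 3 and round_glsl(py) == 2:
            black_pixels.append((x, y))

print(black_pixels)  # [(3, 2)]
```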

Uniform variables

For each uniform variable in the GLSL code, a corresponding slot with the same name is created at the input of the ImageShader node. The following table lists the supported GLSL uniform types and the matching GSN data nodes that can be connected to the corresponding slot.

GLSL uniform type    GSN data node
uniform int          PublicParameter.Data.Integer
uniform float        PublicParameter.Data.Float
uniform bool         PublicParameter.Data.Boolean
uniform vec2         Matrix.Data.Matrix
uniform vec3         Matrix.Data.Matrix
uniform vec4         PublicParameter.Data.Color
uniform mat4         Matrix.Data.Matrix
uniform sampler2D    ImageProcessing.Data.Image

The only exception occurs if a variable of type "uniform sampler2D" is created together with another variable of type "uniform int" whose name consists of the same variable name followed by "Width" or "Height". In this case, the integer variable is not exposed as an input slot. Instead, the width and height properties are gathered from the connected image.

A description and a default value can be set within a comment in the same line after the definition of a uniform variable:

uniform float blue; // description="The value of the blue channel" defaultval="1.0"
uniform vec4 col; // description="An input color" defaultval="1.0, 0.0, 1.0, 1.0"
uniform sampler2D img; // description="An input image"
uniform int imgWidth; // not exposed as slot but gathered from "img"
uniform int imgHeight; // not exposed as slot but gathered from "img"

Texture parameters

For a uniform variable of type "uniform sampler2D", texture parameters can be selected using name-value pairs in the comment after its definition. These include the magnification and minification filters and the wrap parameters. The following table lists the supported options:
Texture Parameter Name    Possible Values
mag_filter                NEAREST (default)
min_filter                LINEAR_MIPMAP_NEAREST (default if supported by browser)
wrap_s                    CLAMP_TO_EDGE (default), REPEAT
wrap_t                    CLAMP_TO_EDGE (default), REPEAT

As an example, the following code multiplies the texture coordinates by 2.0 such that they are afterwards in the range [0.0, 2.0] and the wrap parameter of the input texture "inputImage" becomes relevant:

precision mediump float;
varying vec2 tc; // texture coordinate of the output image in range [0.0, 1.0]
uniform sampler2D inputImage; // wrap_s="REPEAT" wrap_t="REPEAT"

void main() {
  gl_FragColor = texture2D(inputImage, 2.0 * tc);
}
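The REPEAT wrap mode effectively reduces any texture coordinate to its fractional part, so the input image is tiled twice in each direction here. A minimal Python sketch of the coordinate arithmetic (the sampler itself is not modeled):

```python
def wrap_repeat(t):
    # REPEAT keeps only the fractional part of the coordinate
    return t % 1.0

# with tc scaled by 2.0, coordinates in [1.0, 2.0) map back into [0.0, 1.0)
print(wrap_repeat(2.0 * 0.75))  # 0.5
print(wrap_repeat(2.0 * 0.25))  # 0.5 -> same sample as tc.x = 0.75
```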

Accessing the mouse position

In this example, the mouse position is used to create a deformation field that is applied to the input image:

precision mediump float;
const float PI = 3.14159265359;
varying vec2 tc; // texture coordinate of the output image in range [0.0, 1.0]
uniform sampler2D inputImage;
uniform float mouseX;  // description="The mouse x-position in range [0.0, 1.0]"
uniform float mouseY;  // description="The mouse y-position in range [0.0, 1.0]"
uniform float effectSize; // description="Size of effect" defaultval="1.0"

void main() {
  vec2 offset = tc - vec2(mouseX, 1.0 - mouseY);
  float dist = length(offset);  // distance of current pixel to mouse
  float scaledDist = 8.0 / effectSize * dist; // scaled distance
  float weight = (scaledDist > PI)? 0.0 : sin(scaledDist); // compute weighting
  vec4 color = texture2D(inputImage, tc - 0.25 * offset * weight);
  gl_FragColor = color;
}
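The sin-based falloff can be checked numerically: the weight peaks where the scaled distance equals PI/2 and drops to zero beyond PI. A Python sketch with the same constants (using the default effectSize of 1.0):

```python
import math

def weight(dist, effect_size=1.0):
    # same falloff as the shader: sin of the scaled distance, zero beyond PI
    scaled = 8.0 / effect_size * dist
    return 0.0 if scaled > math.pi else math.sin(scaled)

print(weight(0.0))             # 0.0 (no displacement at the mouse position)
print(weight(math.pi / 16.0))  # 1.0 (maximum at scaled distance PI/2)
print(weight(1.0))             # 0.0 (outside the effect radius)
```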

Example: Julia Set Fractal

A few lines of shader code can sometimes generate very interesting output. In this example, the Julia set fractal is computed:

precision mediump float;
varying vec2 tc; // texture coordinate of the output image in range [0.0, 1.0]

const int maxIter = 50;
const float iterStop = 5.0;
uniform float cx; // description="Seed X" defaultval="0.345"
uniform float cy; // description="Seed Y" defaultval="0.065"
uniform float scaleX; // description="Scale in X direction" defaultval="3.0" 
uniform float scaleY; // description="Scale in Y direction" defaultval="3.0" 
uniform float offsetX; // description="Offset in X direction" defaultval="0.0" 
uniform float offsetY; // description="Offset in Y direction" defaultval="0.0" 

void main() {
  // scale or shift output
  float zx = scaleX * (tc.x - 0.5 + offsetX);
  float zy = scaleY * (tc.y - 0.5 + offsetY);
  // compute Julia set iteration
  int ii;
  for(int i = 0; i < maxIter; i++) {
    ii = i;
    float x = zx * zx - zy * zy + cx;
    float y = zy * zx + zx * zy + cy;
    if((x * x + y * y) > iterStop) break;
    zx = x;
    zy = y;
  }
  // number of iterations determines grayscale value
  float gray = 0.0;
  if(ii < maxIter) gray = float(ii) / float(maxIter);
  gl_FragColor = vec4(gray, gray, gray, 1.0);
}
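The iteration z ← z² + c translates directly to CPU code. A Python sketch of the escape-time loop for a single point, using the default seed (0.345, 0.065) from the uniforms above:

```python
def julia_gray(zx, zy, cx=0.345, cy=0.065, max_iter=50, iter_stop=5.0):
    # iterate z <- z^2 + c; the escape iteration determines the gray value
    ii = 0
    for i in range(max_iter):
        ii = i
        x = zx * zx - zy * zy + cx  # real part of z^2 + c
        y = 2.0 * zx * zy + cy      # imaginary part of z^2 + c
        if x * x + y * y > iter_stop:
            break
        zx, zy = x, y
    return ii / max_iter

# a point far from the origin escapes in the first iteration
print(julia_gray(2.0, 2.0))  # 0.0
# a point closer in survives one more iteration before escaping
print(julia_gray(1.0, 1.0))  # 0.02
```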

Example: Analyse Sound

An image shader can also react to the currently played sound. To this end, the current spectrum and waveform data can be passed to the shader as images.


ImageShaders can also produce 3D effects, but all 3D information must either be generated procedurally inside the shader or be extracted from input textures. Often it is easier to split the task into a vertex and a fragment shader. To this end, the GLSL Shader plugin node is provided, which allows using the complete WebGL shader processing pipeline.

Please use the contact form or visit the forum on Reddit if you have questions or suggestions for improvement.