Design notes on the core FlowPaint architecture.
Phase-Design, Featured
Updated Feb 4, 2010

This is an overview of the central FlowPaint architectural elements.

Rendering Pipeline

A generic SampleData object is used to store named properties with float values. It is relatively optimized, using a primitive int-to-float map class internally.
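
The idea could be sketched as follows. This is a hypothetical minimal version, not FlowPaint's actual class: property names are interned to int ids so that the values can live in an int-keyed map (a stand-in for a primitive int-to-float map such as fastutil's).

```python
# Hypothetical sketch of a SampleData-style container. Names are interned
# to int ids; a real implementation would use a primitive int->float map.
class SampleData:
    _ids = {}  # shared name -> id interning table

    def __init__(self):
        self._values = {}  # int id -> float value

    @classmethod
    def _id_for(cls, name):
        # Intern the property name to a stable integer id.
        return cls._ids.setdefault(name, len(cls._ids))

    def set(self, name, value):
        self._values[self._id_for(name)] = float(value)

    def get(self, name, default=0.0):
        return self._values.get(self._id_for(name), default)
```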

The rendering pipeline somewhat resembles that of 3D graphics: first a vector stage, where StrokeFilters (analogous to vertex shaders) operate on stroke points (vertices), changing and storing properties in them, and then a rendering stage, where InkFilters (analogous to fragment shaders) operate on pixel data, changing properties, before specific pixel properties are finally stored to the image. The difference is that in 3D graphics there is one vertex shader and one fragment shader, while in FlowPaint a brush specifies a list of StrokeFilters and a list of InkFilters, which are applied in sequence. This makes it very easy to customize and create new brushes by combining filters of different kinds.
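
The brush-as-two-filter-lists idea might look like this. All names here (make_brush, jitter, darken) are invented for illustration; the point is that new brushes are made by recombining existing filters.

```python
# Hypothetical sketch: a brush is nothing but two ordered filter lists,
# so new brushes are created by recombining existing filters.
def make_brush(stroke_filters, ink_filters):
    return {"stroke_filters": list(stroke_filters),
            "ink_filters": list(ink_filters)}

# Two invented example filters:
def jitter(sample):
    # StrokeFilter stand-in: would perturb position (identity here).
    return [sample]

def darken(pixel_data):
    # InkFilter stand-in: reduce a "value" property by 10%.
    pixel_data["value"] = pixel_data.get("value", 1.0) * 0.9
    return pixel_data

calligraphy = make_brush([jitter], [darken])
airbrush    = make_brush([],       [darken, darken])  # reuses the same filter
```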


A stroke consists of a sequence of samples. Each sample is a SampleData object, which stores the coordinates of the point along with other parameters such as pressure, radius, etc. When a parameter has the same value as in the previous sample, it is not stored. Thus only changes in values are stored in a stroke.
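
A sketch of this "store only changes" rule, under the assumption that the full value of a parameter is recovered by walking back to its most recent change (function names are invented):

```python
# Hypothetical sketch of delta storage for stroke samples.
def resolve(stroke, key, default=None):
    # Walk backwards to find the most recent value of a parameter.
    for sample in reversed(stroke):
        if key in sample:
            return sample[key]
    return default

def append_sample(stroke, params):
    # stroke is a list of dicts; keep only parameters whose value differs
    # from the value currently in effect.
    delta = {k: v for k, v in params.items()
             if not stroke or resolve(stroke, k) != v}
    stroke.append(delta)
```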

When a new stroke is started, all hard-coded parameters from the brush are first stored in the first sample. Then the input widgets defined by the brush are iterated over, and the parameter values they are currently set to are read and stored in the first sample. Finally the start location, pressure, and other input values are stored to the sample.
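
The ordering matters: later writes win, so widget settings override brush defaults, and live input overrides both. A minimal sketch (all names invented):

```python
# Hypothetical sketch of first-sample assembly for a new stroke.
def start_stroke(brush_defaults, widget_values, input_values):
    first = {}
    first.update(brush_defaults)   # hard-coded brush parameters
    first.update(widget_values)    # current settings of brush input widgets
    first.update(input_values)     # position, pressure, etc. from the pen
    return [first]                 # the stroke, with its first sample
```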

As the pen/mouse moves, new samples are added to the stroke. For each sample, the position, pressure, tilt, and time since the start of the stroke are stored.

Each sample point on a stroke (including the first one) is run through the set of StrokeFilters defined by the brush in use. Each filter takes one sample and returns zero or more sample points, which are passed on to subsequent filters. The filters can read the parameters of the incoming sample and change their values in the returned samples.
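
The one-sample-in, zero-or-more-out contract could be chained like this. The two example filters are invented to show the two extremes: dropping a sample entirely, and deriving a new parameter.

```python
# Hypothetical sketch of the StrokeFilter chain: each filter maps one
# sample dict to a list of sample dicts; outputs feed the next filter.
def run_stroke_filters(filters, sample):
    samples = [sample]
    for f in filters:
        samples = [out for s in samples for out in f(s)]
    return samples

def drop_light(sample):
    # Example filter: emit nothing for very light presses.
    return [] if sample.get("pressure", 0.0) < 0.1 else [sample]

def widen(sample):
    # Example filter: derive a radius from the pressure.
    return [{**sample, "radius": 2.0 * sample.get("pressure", 0.0)}]
```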

The end result is a sequence of sample points with the parameters needed for the next stage: rendering.

Stroke Rendering

The rendering takes care of converting a stroke, defined by a sequence of data samples, to pixel values. Currently there is one kind of renderer, the SegmentRenderer. In the future a square renderer of some sort could be added, for rendering stamped pictures.


The segment renderer renders a stroke one segment at a time. A stroke segment is the part of a stroke between two sample points on the stroke.

Each segment needs to know the position, angle, and radius of its start and end points.

The SegmentRenderer loops over the pixels making up the stroke segment, calculating each pixel's relative position along the segment (0..1) and across it (-1..1), and stores these in a DataSample specific to that pixel. It also interpolates the values of all properties defined by both the start and end points of the segment, based on the pixel's relative position between them, and stores the interpolated values in the pixel's DataSample. It then calls the PixelRenderer with that data sample to calculate the final value that should be stored to the layer data.
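
The interpolation step could be sketched as below, assuming simple linear interpolation (the source does not specify the interpolation kind) and using invented names. Only properties defined at both endpoints are interpolated.

```python
# Hypothetical sketch of per-pixel property interpolation for one segment.
# t is the pixel's relative position along the segment, in 0..1.
def interpolate_sample(start, end, t):
    pixel = {}
    for key in start.keys() & end.keys():
        # Linear interpolation between the segment's two endpoint values.
        pixel[key] = start[key] + (end[key] - start[key]) * t
    pixel["along"] = t  # relative position along the segment
    return pixel
```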


The PixelRenderer calculates the final picture channel values for a pixel (typically red, green, blue, and alpha, although channels such as bump maps and normal maps could be added later). It receives a DataSample with the parameters for the pixel as input from the stroke renderer in use. It then calls the InkFilters specified by the brush in sequence; each InkFilter can read and modify the parameters in the DataSample.

Finally, the pixel renderer extracts the values of the channels defined by the layer being rendered to, and updates the layer data at the specified location with the calculated values.
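
Both steps together might look like the following sketch (names invented): run the ink filters in sequence over the pixel's data, then keep only the channels the target layer defines.

```python
# Hypothetical sketch of the PixelRenderer step.
def render_pixel(ink_filters, pixel_data, layer_channels):
    for f in ink_filters:
        f(pixel_data)  # each filter may read and modify parameters in place
    # Extract only the channels defined by the layer being rendered to.
    return {ch: pixel_data.get(ch, 0.0) for ch in layer_channels}
```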

On Filters

The filters can store internal state, which is initialized for each stroke (possibly by creating new filter instances per stroke).

On scalability

The rendering is by nature relatively parallelizable. For example, the stroke rendering could be split across several threads, with e.g. one stroke segment as the unit of work. This should allow multi-core processors to be utilized effectively.
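
A sketch of the segment-as-unit-of-work idea, using a standard thread pool; render_segment here is a stub standing in for the real per-segment pixel work:

```python
# Hypothetical sketch of segment-level parallelism: each consecutive pair
# of samples forms one segment, and each segment is one unit of work.
from concurrent.futures import ThreadPoolExecutor

def render_segment(segment):
    start, end = segment
    return f"rendered {start}->{end}"  # stand-in for real pixel work

def render_stroke_parallel(samples, workers=4):
    segments = list(zip(samples, samples[1:]))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves segment order even though work runs concurrently.
        return list(pool.map(render_segment, segments))
```

Note that segments sharing pixels at their joints would need some care (e.g. compositing order), which this sketch ignores.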
