26.09.2020 16:19

# GPU-side tiled rendering

This article presents an algorithm for rendering tile images on the GPU (Graphics Processing Unit). The algorithm is implemented in GLSL (the OpenGL Shading Language).

Keywords: tile-based graphics, tile image, rendering, GPU, GLSL.

Tile-based graphics is a type of image consisting of small pieces (typically of uniform size), also referred to as tiles. A tile image is built from two components: the tile set, also known as a “tile atlas”, and the tile map, which stores the information on tile positioning.

Tasks of this kind are usually solved with a CPU-side polygonal mesh computation, performed in the following steps:

1. First, determine how many tiles fall within the visible area;

2. Next, create a vertex buffer in which each vertex stores not only its position on the plane, but also the tile's position in the texture atlas;

3. The final step is to generate an index buffer.

This is the classic approach not only to tile image rendering, but to any task where texture mapping can be applied. Its main disadvantage is that most calculations are performed on the CPU while the GPU, which is better suited to multidimensional calculations, stays idle.

This article presents an algorithm for tile image rendering that transfers most of the calculations to the graphics adapter.

Let tileset be a tile atlas with u_tileset_width tiles horizontally and u_tileset_height tiles vertically. Each tile is tile_w × tile_h pixels, where tile_w is the tile width and tile_h is the tile height.

Also, let tilemap be a grayscale image representing the tile map, u_tilemap_width its width in pixels and u_tilemap_height its height. The intensity i_p of a pixel p corresponds to an index into the tileset atlas. The (tile_x, tile_y) position can then be obtained in the following way:

i_p = tile_y * u_tileset_width + tile_x.
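The relation between the linear index and the (tile_x, tile_y) position can be sketched in a few lines of Python; the atlas width of 8 tiles is an arbitrary example value, not taken from the article:

```python
# Convert between a linear tile index and its (tile_x, tile_y) position
# in the atlas. u_tileset_width is the atlas width in tiles.
u_tileset_width = 8  # example value

def index_to_pos(i):
    """Inverse of i = tile_y * u_tileset_width + tile_x."""
    return i % u_tileset_width, i // u_tileset_width

def pos_to_index(tile_x, tile_y):
    return tile_y * u_tileset_width + tile_x

print(index_to_pos(19))    # tile 19 of an 8-wide atlas -> (3, 2)
print(pos_to_index(3, 2))  # round trip -> 19
```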

The coordinate system in OpenGL [3] is arranged in such a way that abscissa x changes from minus one to one within the visible area. The same holds for ordinate y.

Let us first of all calculate how many tiles fit within the visible area. This is the only step that has to be performed on the CPU. Note that all we need to know is how many tiles there can be on the screen; unlike in the CPU-based algorithm, we do not need to care about their positions or types.

u_image_width = window_width / tile_w; u_image_height = window_height / tile_h.

If a scaled image is needed, u_image_width and u_image_height can simply be multiplied or divided by some factor; unlike in the CPU-based algorithm, no mesh recalculation is required.
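This CPU-side step can be sketched as follows; the window and tile sizes are arbitrary example values:

```python
# How many tiles fit within the visible area (the only CPU-side step).
window_width, window_height = 800, 600  # example window size in pixels
tile_w, tile_h = 32, 32                 # example tile size in pixels

u_image_width = window_width / tile_w    # 25.0 tiles horizontally
u_image_height = window_height / tile_h  # 18.75 tiles vertically

# Zooming in by 2x only rescales these values; no mesh is rebuilt.
zoom = 2.0
print(u_image_width / zoom, u_image_height / zoom)
```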

The color computation algorithm for the fragment shader can now be given:

1. Let v_Pos be a point on the plane for which the fragment shader should determine the color. First, v_Pos is mapped onto the tilemap:

v_tilemap_pos.x = (v_Pos.x + 1) * u_image_width / u_tilemap_width / 2; v_tilemap_pos.y = (1 - v_Pos.y) * u_image_height / u_tilemap_height / 2.

Note that the abscissa is normalized by adding one and dividing by two, while the ordinate is additionally reversed. This is because texture positions are measured from the top-left corner, whereas OpenGL screen coordinates are measured from the bottom-left corner.
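This mapping, including the flipped ordinate, can be sketched in Python; all sizes here are arbitrary example values:

```python
# Map a clip-space point v_Pos in [-1, 1]^2 onto normalized tilemap
# coordinates. Sizes below are example values, not from the article.
u_image_width, u_image_height = 20.0, 15.0       # visible tiles
u_tilemap_width, u_tilemap_height = 100.0, 80.0  # tilemap size in tiles

def to_tilemap(v_pos_x, v_pos_y):
    x = (v_pos_x + 1) * u_image_width / u_tilemap_width / 2
    y = (1 - v_pos_y) * u_image_height / u_tilemap_height / 2  # y flipped
    return x, y

print(to_tilemap(-1.0, 1.0))   # top-left corner -> (0.0, 0.0)
print(to_tilemap(1.0, -1.0))   # bottom-right: only part of the map is visible
```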

2. Now the index i can be fetched from the tilemap texture. Note that the color read from the texture is normalized and should be multiplied by 256 (the number of different intensities).

3. At this step we can compute v_tileset_pos, the tile position in the tile set, with the aforementioned formula:

i = v_tileset_pos.y * u_tileset_width + v_tileset_pos.x, and then normalize it:

v_tileset_pos.x = v_tileset_pos.x / u_tileset_width; v_tileset_pos.y = v_tileset_pos.y / u_tileset_height.

4. Now we need to find the inner offset. Let us first calculate the ratio of the on-screen tile size to the texture tile size. Note that if u_image_width and u_image_height are calculated with the aforementioned formulas, this coefficient is always equal to one. Let us normalize the values in the usual way:

tilemap_tile_size.x = 1 / u_tileset_width, tilemap_tile_size.y = 1 / u_tileset_height, screen_tile_size.x = 1 / u_image_width, screen_tile_size.y = 1 / u_image_height, and then calculate the ratio:

coeff = tilemap_tile_size / screen_tile_size.

Finally, the inner offset can be calculated. To this end, v_tilemap_pos is mapped back onto the screen position; its difference with the normalized v_Pos gives the offset:

v_offset.x = (v_Pos.x + 1) / 2 - v_tilemap_pos.x * u_tilemap_width / u_image_width; v_offset.y = (1 - v_Pos.y) / 2 - v_tilemap_pos.y * u_tilemap_height / u_image_height.

5. Thus the desired color can be obtained from the tileset texture using the position:

v_tileset_pos + v_offset * coeff.
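The five steps above can be collected into a plain-Python reference implementation; texture fetches are replaced by direct array indexing, and all sizes are arbitrary example values, not prescribed by the article:

```python
import math

# Software sketch of the fragment-shader algorithm (steps 1-5).
u_tileset_width, u_tileset_height = 4, 4  # atlas size, in tiles
u_image_width, u_image_height = 8.0, 8.0  # visible tiles

def fragment(v_pos, tilemap):
    # 1. Normalize the clip-space point and snap it to a tilemap cell.
    sx = (v_pos[0] + 1) / 2           # normalized screen x
    sy = (1 - v_pos[1]) / 2           # normalized screen y, flipped
    tx = math.floor(sx * u_image_width)
    ty = math.floor(sy * u_image_height)
    # 2. Fetch the tile index from the tilemap (array stands in for texture).
    i = tilemap[ty][tx]
    # 3. Normalized tile position inside the atlas.
    tileset_x = (i % u_tileset_width) / u_tileset_width
    tileset_y = (i // u_tileset_width) / u_tileset_height
    # 4. Inner offset within the tile, rescaled by coeff to atlas units.
    coeff_x = u_image_width / u_tileset_width
    coeff_y = u_image_height / u_tileset_height
    off_x = (sx - tx / u_image_width) * coeff_x
    off_y = (sy - ty / u_image_height) * coeff_y
    # 5. Final atlas coordinate from which the color would be sampled.
    return tileset_x + off_x, tileset_y + off_y

tilemap = [[(x + y) % 16 for x in range(8)] for y in range(8)]
print(fragment((-1.0, 1.0), tilemap))  # top-left fragment -> (0.0, 0.0)
```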

This algorithm was implemented in GLSL. Variable and uniform names were taken unchanged from the algorithm definition. GLSL version 1.20 [1] was used in order to support as many devices as possible. To use this algorithm on mobile devices, the shader version should be changed to GLSL ES 1.00 [2].

The vertex shader code listing is as follows:

```glsl
#version 120
attribute vec2 a_Pos;
varying vec2 v_Pos;

void main() {
    v_Pos = a_Pos;
    gl_Position = vec4(a_Pos, 0.0, 1.0);
}
```

Attribute a_Pos stores the position of the vertex being processed. Varying v_Pos is passed to the fragment shader and used exactly as in the algorithm. At the end of the vertex shader the vertex position must be stored in the special variable gl_Position [3]. This shader does nothing beyond extending the two-dimensional position to a four-dimensional vector. The fragment shader code listing is as follows:

```glsl
#version 120
uniform sampler2D t_TileMap;
uniform sampler2D t_TileSet;
uniform float u_image_width;
uniform float u_image_height;
uniform float u_tileset_width;
uniform float u_tileset_height;
uniform float u_tilemap_width;
uniform float u_tilemap_height;
varying vec2 v_Pos;

void main() {
    // 1. Map the fragment position onto the tilemap, snapped to a cell.
    vec2 v_tilemap_pos;
    v_tilemap_pos.x = floor((v_Pos.x + 1.0) / 2.0 * u_image_width) / u_tilemap_width;
    v_tilemap_pos.y = floor((1.0 - v_Pos.y) / 2.0 * u_image_height) / u_tilemap_height;

    // 2. Fetch the tile index; the normalized color is scaled back up.
    float i = texture2D(t_TileMap, v_tilemap_pos).r * 256.0;

    // 3. Normalized tile position inside the atlas.
    vec2 v_tileset_pos;
    v_tileset_pos.x = floor(mod(i, u_tileset_width)) / u_tileset_width;
    v_tileset_pos.y = (floor(i / u_tileset_width) + 1.0) / u_tileset_height;

    // 4. Inner offset, rescaled by the tile size ratio.
    vec2 tilemap_tile_size = vec2(1.0 / u_tileset_width, 1.0 / u_tileset_height);
    vec2 screen_tile_size = vec2(1.0 / u_image_width, 1.0 / u_image_height);
    vec2 coeff = tilemap_tile_size / screen_tile_size;

    vec2 v_offset;
    v_offset.x = (v_Pos.x + 1.0) / 2.0 - v_tilemap_pos.x * u_tilemap_width / u_image_width;
    v_offset.y = (1.0 - v_Pos.y) / 2.0 - v_tilemap_pos.y * u_tilemap_height / u_image_height;

    // 5. Sample the final color from the atlas.
    gl_FragColor = texture2D(t_TileSet, v_tileset_pos + v_offset * coeff);
}
```

At the end of the procedure the desired color is placed into the gl_FragColor variable [3].

This algorithm reduces the CPU-side work to two simple stages; all other rendering logic is performed on the GPU side. This is useful because, firstly, the GPU is better adapted to multidimensional calculations than the CPU and, secondly, while the GPU is rendering the image, the CPU is free for other program logic.

References

1. Kessenich J. The OpenGL® Shading Language, Language Version 1.20, 2006.

2. Kessenich J., Simpson R. J. The OpenGL® ES Shading Language, Version 1.00, 2009.

3. Segal M., Akeley K. The OpenGL® Graphics System: A Specification, Version 2.0, 2004.

Kuvaev A. E.
