Channel: Pixi.js Latest Topics
Viewing all 3981 articles

How can I show a loaded texture from a WEBP file?


Hi! I have a problem with PIXI 5. I'm making a social game and I need to load avatars, but the social network serves every file in WEBP format! How can I resolve this problem? Thank you.
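For what it's worth, browsers that run PIXI v5 generally decode WEBP natively, so `PIXI.Sprite.from(url)` usually just works; the main holdout at the time was Safari. A hedged sketch of a fallback, assuming (hypothetically) that the avatar server also offers a `.jpg` variant:

```javascript
// Sketch: choose a fallback URL when the browser can't decode WEBP.
// The .jpg variant is an assumption -- your CDN may use another format.
function avatarUrl(webpUrl, webpSupported) {
  return webpSupported ? webpUrl : webpUrl.replace(/\.webp$/i, '.jpg');
}

// Browser-side usage (not run here):
// const supportsWebp = document.createElement('canvas')
//   .toDataURL('image/webp').startsWith('data:image/webp');
// const sprite = PIXI.Sprite.from(avatarUrl(url, supportsWebp));
```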


Pixi.Text v5.1.1 doesn't work


Hi all,
I'm using Pixi 5.1.1.
I want to learn how to use BitmapFont, but I have a problem with it.
I'm using this example:

import desyrel from "./desyrel.xml";
 
/* other code */
 
app.loader
   .add('desyrel', desyrel)
   .load(onAssetsLoaded);
 
function onAssetsLoaded() {
   const bitmapFontText = new PIXI.BitmapText('bitmap fonts are supported!\nWoo yay!', { font: '55px Desyrel', align: 'left' });
 
   bitmapFontText.x = 50;
   bitmapFontText.y = 200;
 
   container.addChild(bitmapFontText);
}

(the Desyrel font is from the examples too)

And I get this error:

Uncaught Error: Texture Error: frame does not fit inside the base Texture dimensions: X: 1 + 38 = 39 > 1 and Y: 1 + 74 = 75 > 1
    at Texture.prototypeAccessors.frame.set (core.es.js:3280)
    at new Texture (core.es.js:2962)
    at Function.registerFont (text-bitmap.es.js:611)
    at Function.parse (text-bitmap.es.js:652)
    at completed (text-bitmap.es.js:736)
    at MiniSignal.dispatch (mini-signals.js:111)
    at resource-loader.esm.js:2257
    at next (resource-loader.esm.js:47)
    at Loader.use (spritesheet.es.js:307)
    at resource-loader.esm.js:2255
    at resource-loader.esm.js:55

Does someone know what it can be and how to fix it?

Media slideshow


Hello all,

Does anybody know if it is possible to detect when a loaded video has completed? Or even some basic detection and control (onComplete, onStart, play, stop)? I am trying to create a media slideshow that supports images and MP4s. I have built a similar slideshow in the past with TimelineMax, but it appears there is no video support in the JS version. This is what I have so far (just a quick prototype to build on).

 


let renderer;
let stage =  new PIXI.Container();
let button;
const loader = PIXI.loader;

renderer = new PIXI.CanvasRenderer(1024, 768);
document.body.appendChild(renderer.view);

load();


function load()
{
    loader.add('box', 'assets/yellowBox.png')
        .add('movie', 'assets/testVideo.mp4')
            .load((loader, resource) =>
            {
                onAssetsLoaded(resource);

            })


}


function onAssetsLoaded(resource)
{
    console.log('loaded');

    // Video stuff
    let videoData = resource['movie'].data;

    let videoBaseTexture = PIXI.VideoBaseTexture.fromVideo(videoData);

    let videoSprite = PIXI.Sprite.from(videoBaseTexture);
    stage.addChild(videoSprite);
}
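Since `resource['movie'].data` above is a plain HTMLVideoElement, the standard media events and methods give exactly the control asked about ('ended', 'play', 'pause'). A sketch, with the slide-advancing logic kept as a pure helper (`showSlide`, `slides`, and `current` are hypothetical names):

```javascript
// Pure helper: advance the slideshow index, wrapping past the last slide.
function nextIndex(current, total) {
  return (current + 1) % total;
}

// Browser-side sketch (assumes the loader code above; showSlide is yours):
// const video = resource['movie'].data;   // HTMLVideoElement
// video.addEventListener('ended', () => {
//   current = nextIndex(current, slides.length);
//   showSlide(current);
// });
// video.play();    // start; 'play'/'playing' events are the onStart hook
// video.pause();   // stop
```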

 

 

Long RAF execution behavior in Chrome devtools


I'm getting a lot of inconsistencies in fps because requestAnimationFrame calls randomly take a long time, even though nothing appears to be executing inside them until the tail end of the call. The call tree shows PIXI taking only 1-3ms to flush or update transforms, while the RAF call can last anywhere from 8-15ms. The devtools also show that the GPU and other threads aren't doing any irregular work when the long RAF call happens. It happens at least 10-20 times within a 30-second Chrome devtools performance recording. Has anyone else experienced this kind of thing before?

https://imgur.com/a/8Uh8028

Spritesheet without TexturePacker


I have this image, which is just a set of cards. I know you can crop a texture by putting a Rectangle object in its frame property, but that only works for a single Texture object. How can I have multiple sprites that use different cards, using multiple Rectangle objects?

I know TexturePacker automatically generates a spritesheet that pixi.js can use, but that program is made to pack different images into a single image, and the image I have is already packed.
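TexturePacker isn't needed for an already-packed image: several `PIXI.Texture` objects can share one `PIXI.BaseTexture`, each with its own frame `Rectangle`. A sketch, assuming the cards form a uniform grid (the 79x123 card size is made up for illustration):

```javascript
// Frame rectangle for the card at grid position (col, row),
// assuming every card has the same width and height.
function cardFrame(col, row, cardW, cardH) {
  return { x: col * cardW, y: row * cardH, width: cardW, height: cardH };
}

// PIXI usage sketch (one sprite per card, all sharing `base`):
// const base = PIXI.BaseTexture.from('cards.png');
// const f = cardFrame(2, 1, 79, 123);
// const tex = new PIXI.Texture(base, new PIXI.Rectangle(f.x, f.y, f.width, f.height));
// const sprite = new PIXI.Sprite(tex);
```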

Background safe area scaling with window size


Hi guys, I need help with my game. 

I'm working on game/background resizing. I improved my resize function by studying this article (http://www.williammalone.com/articles/html5-game-scaling/; the second part doesn't work for me). Now I have a problem with scaling the background to a safe area (like Yggdrasil Gaming's slots, for example). Can someone help me?
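One common approach, sketched here under the assumption that the safe area is centered inside the background art: scale so the safe area always fits the window, then center the background, letting the art outside the safe area overflow the window edges.

```javascript
// Scale and position for a background of bgW x bgH whose "safe area"
// (safeW x safeH, assumed centered in the art) must stay fully visible.
function safeAreaLayout(winW, winH, bgW, bgH, safeW, safeH) {
  const scale = Math.min(winW / safeW, winH / safeH); // safe area fits
  return {
    scale,
    x: (winW - bgW * scale) / 2, // center the background
    y: (winH - bgH * scale) / 2,
  };
}

// PIXI usage sketch (1920x1080 art with a 1280x720 safe area is an assumption):
// const { scale, x, y } = safeAreaLayout(window.innerWidth, window.innerHeight,
//                                        1920, 1080, 1280, 720);
// background.scale.set(scale);
// background.position.set(x, y);
```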

Poor Performance on Ultrabook vs. Phone


Hi all, I have been working on a game. It's unoptimized in many respects, but I noticed something truly strange about its performance: 

On my Pixel 2 phone, the performance is acceptable (30-60 fps), but on my HP Spectre x360 ultrabook, the performance is a virtual slideshow, around 5 fps. The performance is better on my Nexus 5X, and on my powerful desktop PC it's fine as well. The slowness on the Ultrabook is irrespective of browser (though Chrome performs best).

The ultrabook has Intel HD Graphics 620. Other WebGL demos seem to run fine (pixi, 2D and 3D). It profiles much higher than my Pixel 2 for WebGL.

About the game:

It is a roguelike-style RPG with tiled maps and 1-2 sprites per tile. Because of this I initially suspected the number of draw calls as the culprit, so I set all tiles that are not yet seen or are offscreen to `visible = false`. However, the ultrabook was still a slideshow.

Next I downloaded the extension WebGL Insight. I noticed that under Resources -> Buffers -> Buffer2, the Buffer is an array with 24,000 elements. I don't know the inner workings of WebGL, and so I don't know the exact significance of this array, but the length seemed suspicious.

Under Resources -> Textures, there are 205 textures.

In the Chrome profiler, 87% of total time (20% of self time) is used by `Function Call -> Animation Frame Fired`. Next is Composite Layers (6.6% self / 6.6% total). Then renderWebGL (5.3% self / 27.5% total). Then updateTransform (5.2% self / 5.8% total).

Does anyone have any ideas? Is there a precedent for these kinds of platform inconsistencies?

Keeping track of every sprite position in a tiling sprite


I made a container containing a bunch of sprites, then made a single texture out of it and created a tiling sprite. My question: is it possible to still keep track of the positions of the sprites that are now baked into the tiling sprite? I need to animate the tiling sprite and still track each sprite's position.
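This is doable as long as the original layout data is kept around: the baked-in sprites repeat every tile width/height, shifted by `tilePosition`. A sketch of the wrap-around math for one axis:

```javascript
// On-screen x of the first visible copy of a feature that sat at `x`
// inside the original tile, after scrolling the tiling sprite so that
// tilingSprite.tilePosition.x === offset. Copies repeat every tileW px.
function tiledX(x, offset, tileW) {
  return ((x + offset) % tileW + tileW) % tileW; // handles negative offsets
}

// Further copies sit at tiledX(...) + k * tileW for k = 1, 2, ...
// (the same formula applies to y with tilePosition.y and the tile height).
```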


Pixi Text width doesn't include padding


Hi all
I use Pixi Text with padding in the style, and I noticed one thing:

const textSample = new PIXI.Text("text", {
   fontFamily: 'Montserrat',
   fontSize: 50,
   padding: 50,
   fill: mainRed,
});

The width of the text doesn't include the padding, which is why I have to do something like this to align the text in its container:

container.pivot.x = container.width / 2 - 50;
container.pivot.y = container.height / 2 - 50;

Does someone know how to align it properly? I don't think this is the best solution.
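A small generalization of the workaround above, assuming the padding value is known (here mirrored from the style) and matching the behavior the post observes, namely that the reported width/height exclude the padding:

```javascript
// Pivot that centers a container holding a PIXI.Text whose style used
// `padding` (reported width/height exclude it, per the post above).
function centeredPivot(width, height, padding) {
  return { x: width / 2 - padding, y: height / 2 - padding };
}

// Usage sketch:
// const p = centeredPivot(container.width, container.height, 50);
// container.pivot.set(p.x, p.y);
```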

Pixi.js and Graphics


Hi
How can I fix artifacts in Pixi Graphics when I try to draw a triangle?

But when I try to draw text, it's OK.
And these are my app settings:
 

const app = new PIXI.Application({
    width: 600,
    height: 600,
    backgroundColor: 0x141019,
    antialias: true,
    resolution: window.devicePixelRatio,
});


Best practice for using layers


I'd like to write a bomberman-like game with lace.gg using pixi.js v5 as the rendering engine.

I was wondering what the best practices are when it comes to layers. I assume it wouldn't make sense to create 1 layer per game element.

Would it make sense to put everything into 1 layer?
Or should I put the background  (which never changes) into one layer and the rest into another one?
Or should I maybe put all players (constantly moving game elements) into one layer and everything else into another layer?
Or maybe every type of game object should go into its own layer? Players, bombs, items, invulnerable blocks, normal blocks, explosions, background ...

I'm also wondering if I should use layers as in:

        this.stage = new PIXI.Container();
        this.backgroundLayer = new PIXI.Container();
        this.mainLayer = new PIXI.Container();

        this.stage.addChild(this.backgroundLayer, this.mainLayer);

or if I should use the pixi.layers plugin?

Graphics artifacts


I've recently started playing with Pixi so I'm probably doing something very wrong here, but I encountered an issue when I'm drawing circles with PIXI.Graphics.

So what I attempted to do is fill the screen with circles that don't overlap. Each circle starts at radius 1 and then grows every frame. Pretty simple. My problems start at about ~500 circles.

 

It starts out ok..

But then this happens

Is drawing hundreds of circles with drawCircle() into a single Graphics object per frame bad? I don't care about performance at this time, just doing a simple circle packing exercise.

Simplified code below. No real reason to include growing and populating math here.  

I've tried putting beginFill and lineStyle inside the loop for every circle, as well as completely outside the ticker. In the example below I have a separate loop doing the math first and then another drawing the circles, but I tried them together too.

I thought the long computation times for growing and populating each frame have something to do with my issue, but maybe it's a pixi/webgl limitation I have no clue about. Halp

interface Circle {
    x: number
    y: number
    r: number
}

const circles: Circle[] = [] //array filled with circle objects. 

//then on every frame I do

public animate(delta: number) {
    for (const circle of circles) {
      growIfPossible() //simplified
    }
    populate() //adding new circles - simplified

    this.g.beginFill(0xFFFFFF)
    this.g.lineStyle(2, 0x121212)
    this.g.clear()
    for (const circle of this.circles) {
      this.g.drawCircle(circle.x, circle.y, circle.r)
    }
    this.g.endFill()
  }
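One thing worth checking in the snippet above: `PIXI.Graphics#clear()` also resets the current fill and line styles, so calling it after `beginFill`/`lineStyle` means the circles are drawn with defaults. A sketch of the corrected per-frame order, plus the pure overlap test circle packing relies on:

```javascript
// Pure helper for circle packing: true if two circles overlap.
function overlaps(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y) < a.r + b.r;
}

// Per-frame drawing sketch: clear() FIRST, then set styles, then draw.
// g.clear();
// g.beginFill(0xFFFFFF);
// g.lineStyle(2, 0x121212);
// for (const c of circles) g.drawCircle(c.x, c.y, c.r);
// g.endFill();
```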

 

Aliasing artifacts using DisplacementFilter


I've written a small parser in javascript to read Photoshop Liquify meshes so I can apply liquify warps created in Photoshop to images dynamically in a browser. I first wrote a crappy implementation using manual pixel displacement calculations in canvas. That worked in principle and resulted in images visually identical to what I got in Photoshop, but it was horribly slow. Then while googling displacement maps to figure out a way to speed up my code I stumbled across the PIXI demos and realized I could potentially get super fast performance using PIXI & WebGL.

The good news: the PIXI version is working and is several orders of magnitude faster than my canvas version. Astonishing!

The bad news: I'm getting pretty nasty aliasing artifacts in regions where the displacement is large. My canvas implementation also looked rough until I added bilinear interpolation, but I suspect the problem here is that the displacement vectors which are originally represented by 32 bit floats need to be translated into 8 bit integers in the PIXI displacement maps. I was hoping that interpolation could fix that in PIXI too, but maybe not.

Here's an imgur album that demonstrates the issue:  https://imgur.com/a/jiKo3bP

The test image is 800x800 pixels, and the maximum displacement of the warp is 371 pixels. The displacement of +/- 371 needs to be represented by an integer between 0 and 255, so every integer value of the displacement map corresponds to a jump of about 371*2/255 = 3 pixels. To me the aliasing artifacts look about 3 pixels wide, so maybe my suspicion is correct. (Another indication of this: the entire displaced image shifts up & left about 1.5 pixels even where there should be no displacement. The value 1.5 happens to be half of three, so maybe the zero displacement level is "halfway off center".)
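The arithmetic above checks out and is easy to verify directly (numbers taken from the post):

```javascript
// One 8-bit quantization step when +/- maxDisp pixels are packed
// into a single 0..255 color channel.
function quantStep(maxDisp) {
  return (maxDisp * 2) / 255;
}

// For the 371 px warp: about 2.91 px per step, and a half-step bias of
// about 1.46 px, consistent with the observed ~1.5 px global shift.
```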

So, my question: is there a way in PIXI to represent displacements more accurately? Maybe another method that doesn't rely on 8 bit color channels? If not, can these aliasing artifacts be reduced by adding or improving the interpolation?

Here are the relevant functions of the code I'm using:

// Translate a mesh object containing x & y coords of Photoshop's displacement vectors
// (displacement measured in pixels) to an 8 bit RGBA image with x & y displacements
// stored in red and green channels. Zero displacement has value 128. Scale so the
// 8 bits we have are used efficiently, i.e. the maximum displacement goes to 0 or 255.
function mesh2meshimage(mesh) {
    let absMax = 0;
    for (let pixelindex = 0; pixelindex < mesh.x.length; pixelindex++) {
        absMax = Math.max(absMax, Math.abs(mesh.x[pixelindex]), Math.abs(mesh.y[pixelindex]));
    }

    const meshimage = new Uint8ClampedArray(mesh.width * mesh.height * 4);

    for (let pixelindex = 0; pixelindex < mesh.x.length; pixelindex++) {
        const channelindex = pixelindex * 4;
        meshimage[channelindex + 0] = 128 + mesh.x[pixelindex]/absMax*127;     // R
        meshimage[channelindex + 1] = 128 + mesh.y[pixelindex]/absMax*127;     // G
        meshimage[channelindex + 2] = 128;      // B
        meshimage[channelindex + 3] = 255;      // A
    }

    const imagedata = new ImageData(meshimage, mesh.width, mesh.height);

    return {imagedata:imagedata, origwidth:mesh.origwidth, origheight:mesh.origheight, max:absMax};
}

// Now use PIXI to render the displaced image.
function liquify(meshimage) {
    const canvas = document.createElement('canvas');
    const context = canvas.getContext('2d');

    canvas.width = meshimage.imagedata.width;
    canvas.height = meshimage.imagedata.height;
    context.putImageData(meshimage.imagedata, 0, 0);

    const displacementSprite = PIXI.Sprite.from(canvas);

    const scalex = meshimage.origwidth/meshimage.imagedata.width;
    const scaley = meshimage.origheight/meshimage.imagedata.height;
    displacementSprite.scale.set(scalex, scaley);
    PX.app.stage.addChild(displacementSprite);

    const displacementFilter = new PIXI.filters.DisplacementFilter(displacementSprite);
    PX.sprite.filters = [displacementFilter];

    displacementFilter.scale.x = meshimage.max * scalex / 2;
    displacementFilter.scale.y = meshimage.max * scaley / 2;
}

 

Can DisplacementFilter be implemented using the vertex shader?


PIXI's DisplacementFilter works by using a (mostly) default vertex shader and does the actual displacement work in the fragment shader. In the fragment shader, it reads the x & y displacements from the red and green color channels of the displacement map. Then to determine what color should go at (x,y), it samples the color at (x + x_displacement, y + y_displacement).

I wonder if this can be flipped around so that the displacement work is done in the vertex shader by displacing vertices according to the displacement map. For this to work, I think the displacements would be stored at the source pixel instead of at the destination pixel, so displacement vectors go in the opposite direction.

I'm asking because in the application I'm developing I want to be able to extrapolate displacements outside the ordinary range. In the default DisplacementFilter I can vary displacementFilter.scale to change the warp smoothly from zero to full displacement. But I can't just plug in 150% to get an even larger displacement, or put -50% to go the other direction (because changing the scale only affects pixels that already get displaced). But if displacements were interpreted as vertex shifts, I believe I could do just that.

Is this feasible? Has anyone already experimented along these lines? I've begun sketching out some code, but it's not working: no errors, but also no displacement. I hesitate to share it because I honestly don't quite know what I'm doing yet when it comes to shaders. But oh well, here it is. Please go easy on me. :)

Vertex shader:

attribute vec2 aVertexPosition;

uniform mat3 projectionMatrix;
uniform mat3 filterMatrix;

varying vec2 vTextureCoord;

uniform vec4 inputSize;
uniform vec4 outputFrame;
uniform vec4 inputClamp;

uniform vec2 scale;
uniform mat2 rotation;
uniform sampler2D mapSampler;

vec2 filterTextureCoord(void)
{
    return aVertexPosition * (outputFrame.zw * inputSize.zw);
}

void main(void)
{
    vTextureCoord = filterTextureCoord();
    vec2 vFilterCoord = ( filterMatrix * vec3( vTextureCoord, 1.0)  ).xy;

    vec4 map = texture2D(mapSampler, vFilterCoord);

    map -= 0.5;
    map.xy = scale * inputSize.zw * (rotation * map.xy);

    vec2 position = aVertexPosition * max(outputFrame.zw, vec2(0.)) + outputFrame.xy;
    gl_Position = vec4((projectionMatrix * vec3(position + map.xy, 1.0)).xy, 0.0, 1.0);
}

 

Fragment shader (just the default):

varying vec2 vTextureCoord;

uniform sampler2D uSampler;

void main(void)
{
  gl_FragColor = texture2D(uSampler, vTextureCoord);
}

 

Map Type Labels


Is there a library or helper currently available to show map-style labels coming off one area without overlapping each other?

Thank you


Using Matter.js for physics with Pixi.js


I'm looking to make a simple game using Pixi.js as a renderer and Matter.js for the physics. But I'm having a hard time wrapping my head around how to do that. Does anybody have some code or projects, using both pixi and matter, that I could take a look at? Any documentation/articles/videos are also welcome.
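The usual pattern is to keep the two worlds separate and copy transforms once per tick: Matter owns the simulation, Pixi only renders. A minimal sketch (the `engine`, `bodies`, and `sprites` names are assumptions):

```javascript
// Copy a Matter.js body's transform onto a Pixi display object.
// Plain objects work too, which is how it's exercised below.
function syncSprite(sprite, body) {
  sprite.x = body.position.x;
  sprite.y = body.position.y;
  sprite.rotation = body.angle;
}

// Game loop sketch:
// app.ticker.add(() => {
//   Matter.Engine.update(engine, app.ticker.deltaMS);
//   for (let i = 0; i < bodies.length; i++) syncSprite(sprites[i], bodies[i]);
// });
```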

Much appreciated.

Copy Graphics content into another Graphics


Hi, is it possible to copy the content of one Graphics into another? I saw there is a clone method, but I don't want to create a new instance; I want to copy its contents into an existing Graphics.

 

Thanks.

PIXI V5 Boilerplate - ES6 / Object Oriented

Does pixi.js call gl.drawArrays for each sprite in the scene?


I am trying to learn how to render 2D sprites using WebGL. I use this to render a single quad:

 

  let positions = [
    -1, 1,
    -1, -1,
    1, -1,
    -1, 1,
    1,-1,
    1, 1
  ];

  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);

 

Now I want to render more sprites. Do I have to set up positions like this for each sprite and call `gl.drawArrays` once per sprite?
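For what it's worth, this is exactly the problem Pixi's sprite batcher solves: it appends every sprite's quad into one big vertex buffer and issues a single draw call per batch, switching batches only when textures or state change. A sketch of the batching idea with position data only:

```javascript
// Append each sprite's quad (two triangles, 6 vertices, 12 floats)
// into ONE array, so a single gl.drawArrays call draws them all.
function buildQuads(sprites) {
  const positions = [];
  for (const { x, y, w, h } of sprites) {
    positions.push(
      x, y,      x, y + h,      x + w, y + h,  // triangle 1
      x, y,      x + w, y + h,  x + w, y       // triangle 2
    );
  }
  return positions;
}

// gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(buildQuads(sprites)), gl.DYNAMIC_DRAW);
// gl.drawArrays(gl.TRIANGLES, 0, sprites.length * 6);  // one call, many sprites
```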

Using Pixi for procedural image drawing.


Hello! I'm writing a pixelArt game.

1) I'd like to use procedural image generation, i.e. using functions to create and modify images pixel by pixel on the fly. Some examples: modifying images to create dynamic shadows on the object itself, grass reacting to player movement, or characters calculated/animated as vectors with pixel clothes grown onto the vector skeleton for display.

My question is: can Pixi do pixel manipulation? Is there something I should know about it? Is Pixi the best choice for such a task? Does it even give any advantages compared to HTML canvas pixel manipulation, e.g. does it use the GPU?
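On question 1: yes. You can fill a typed array on the CPU and upload it as a texture with `PIXI.Texture.fromBuffer`, after which drawing and scaling run on the GPU, unlike canvas `putImageData`, which stays CPU-bound. A sketch (the buffer helper is plain JS; the Pixi calls are commented):

```javascript
// Write one RGBA pixel into a raw buffer for a w-pixel-wide image.
function setPixel(buf, w, x, y, r, g, b, a) {
  const i = (y * w + x) * 4;
  buf[i] = r; buf[i + 1] = g; buf[i + 2] = b; buf[i + 3] = a;
}

const W = 4, H = 4;
const buf = new Uint8Array(W * H * 4);
setPixel(buf, W, 1, 2, 255, 0, 0, 255); // one red, fully opaque pixel

// PIXI sketch: upload and keep hard pixel edges for pixel art:
// const tex = PIXI.Texture.fromBuffer(buf, W, H);
// tex.baseTexture.scaleMode = PIXI.SCALE_MODES.NEAREST;
// const sprite = new PIXI.Sprite(tex);
// sprite.scale.set(16); // "big pixels" via sprite scaling
```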

2) I'd like to scale pixels freely, so two objects in the scene can have pixels of different sizes. Is there a proper way to do this? I'm thinking of using a scale method on the image, or just drawing a lot of rectangles with a PIXI ParticleContainer or something similar.

Also, if someone thinks my ideas are dumb for some reason and I should prerender everything, please say so!

Here's some probably irrelevant information: I plan to ship the game as a desktop app using Electron.

Thanks for your attention! 🎩


