Reflecting a coordinate in pygame - python

I've been working on a platformer with a specific mechanic that allows the player to "mirror" itself onto the other side of the level. Think of this:
The circle when E is pressed would go from here
To here afterwards: the exact same distance from the splitting platform, but in this case mirrored.
Trying to implement this, I came up with an equation that factors in a platform I set up to split the two areas, and teleports the player to the exact mirror position based on the screen size.
To mirror the position from the top, I wrote this. It takes the screen height (523, to be exact) and subtracts from it the height of the two sections (260 pixels) minus the position of the player.
player.pos.y = screenSize - (260 - player.pos.y)
And to mirror from the bottom half, this line, which just subtracts the height of the sections plus 3 from the position.
player.pos.y = player.pos.y - 263
The only issue is that, whether on the ground or in the air, it teleports you to a completely incorrect area nowhere near where you were, basically around the edge of the screen. This also occurs when teleporting from the bottom section.
Because of the odd way coordinates work in pygame, I can't use the standard reflection methods from geometry. Is there a way to do a reflection in a coordinate system like pygame's?

The problem is that you're doing this with reference to the entire screen, rather than the mirror: you've reflected across the center line, regardless of the mirror's position.
You want the new position to be as far from the mirror as the old, but in the opposite direction. Thus, we get this derivation, solving for the new coordinate:
new - mirror = mirror - old
new = 2 * mirror - old
See how that works for you.
Substituting your variables, and assuming a mirror object:
player.pos.y = 2 * mirror.pos.y - player.pos.y
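As a minimal sketch, with a hypothetical mirror_y naming the y coordinate of the mirror line:

```python
def reflect_across(mirror_y, y):
    """Reflect a y coordinate across a horizontal mirror line.

    Works regardless of pygame's top-left origin, because
    new = 2 * mirror - old only depends on distances from the line.
    """
    return 2 * mirror_y - y

# A player at y=100 with the mirror line at y=260 ends up at y=420,
# the same 160 pixels from the line but on the other side.
print(reflect_across(260, 100))  # 420
```

Applying it twice returns the original position, which is exactly what a reflection should do.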


Make mouse position relative to camera zoom level?

I just got camera zoom with OpenGL up and running in my little Pyglet game, but now I'm facing a problem: when I zoom in or out, the game objects' hitboxes obviously won't scale, so the game doesn't respond to mouse events correctly. Altering thousands of objects' properties might just be a bit slow, so I was wondering if I could modify the mouse's position instead. I just have no idea how. Zooming is done by glOrtho(), by multiplying its parameters.
Zooming code (self.dx and self.dy are the total movement of the camera so far, and self.zoom is a multiplier from 0.1 to 2):
glOrtho(-screen.width / (2 * self.zoom), screen.width / (2 * self.zoom), -screen.height / (2 * self.zoom), screen.height / (2 * self.zoom), -1, 1)
glTranslatef(self.dx - screen.width / 2, self.dy - screen.height / 2, 0)
What about reversing the zooming calculations for the mouse coordinates?
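For instance, a sketch of that inversion. The names (cam_dx, cam_dy, zoom) are assumptions based on the question's camera state, and the exact signs depend on your glOrtho/glTranslatef setup, so treat this as a template to adapt:

```python
def screen_to_world(mx, my, cam_dx, cam_dy, zoom, screen_w, screen_h):
    # Undo the zoom, which is centred on the middle of the screen,
    # then undo the camera pan. Signs may need flipping to match
    # your projection setup.
    wx = (mx - screen_w / 2) / zoom - cam_dx
    wy = (my - screen_h / 2) / zoom - cam_dy
    return wx, wy

# With no pan and no zoom, the screen centre maps to the world origin.
print(screen_to_world(400, 300, 0, 0, 1.0, 800, 600))  # (0.0, 0.0)
```

You then test the converted world position against the unscaled hitboxes, so nothing per-object needs updating.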
Edit 2
The way I'm handling mouse collisions with game objects is admittedly questionable. I'm actually using pygame.Rect objects to represent objects' positions, and then colliding them with the mouse's position. It has worked great so far, since I hadn't done any zooming until now. Maybe there's a way that better suits the OpenGL/3D world?
If you are using pygame.Rect you're specifying a point on the screen, not a point in the 3d world.
What you should have is:
A base class "enemy".
Any "enemy" must have two opengl triangles, a texture and a "hitbox" object.
A hitbox object specifies two opposite corners in 2D space. It is assumed that the 3D coordinate is the same as that of the two triangles. You can instead let the triangles determine the hitbox; in that case there's no need for an actual object, just drop it and override the accessor functions.
When you shoot, determine a 3D vector for the direction of the shot. Divide the screen in two, left/right, and determine which half it was shot in; then do the same for up/down. Then go through your targets: check for visibility first; if visible, check which side of the screen each is on; if it's the right side, check whether the projectile would hit. Finally, gather all those it would hit and only hit the first one.
Illustrated above is one of the many versions of how this may be done. It's one I quickly came up with. I hope that looking at it you will realize what is wrong (probably the fact that the hitboxes have nothing to do at all with your actual enemies) and be capable of fixing it based on the example.
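The hitbox object described above might look like this minimal sketch (in Python for brevity; the class and method names are illustrative, not from any library):

```python
class Hitbox:
    """Axis-aligned box given by two opposite corners in world space."""

    def __init__(self, x1, y1, x2, y2):
        # Normalise so the corners can be given in any order.
        self.x1, self.x2 = min(x1, x2), max(x1, x2)
        self.y1, self.y2 = min(y1, y2), max(y1, y2)

    def contains(self, x, y):
        return self.x1 <= x <= self.x2 and self.y1 <= y <= self.y2
```

The point being tested would be the mouse/shot position already converted into the same space the hitboxes live in.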

Pygame Top-Down Scrolling [closed]

For a project I want to do for my class' Pygame final, I want to make The Legend of Zelda: A Link To The Past. However right when I started I realized that the scrolling would be quite an issue.
How do you implement a scrolling technique that follows the player until the edge of a map or image but still allows the player to move closer to the edge?
A reference, because I feel as if I am not wording myself correctly:
My personal idea was to use a switch that switches the background image moving to Link's image moving.
A major component of any branch of engineering is breaking down big problems into smaller ones. So let's break down your problem.
How would you design a scrolling technique that follows the player until the edge of a map or image but still allows the player to move closer to the edge?
Okay, so there are three problems here: moving the player sprite, moving the background sprite, and working out when to do each. Moving the player sprite is pretty straightforward: give it an (x, y) coordinate on the screen and move that according to the input controls (keyboard/mouse/etc).
Now let's consider the background sprite. For simplicity we'll assume that your whole background can be loaded as one big sprite. We want to render a portion of that background onto the screen, so we need to maintain the position of the background relative to the screen with its own coordinates.
You can think about this two ways - either the screen stays stationary and the background moves behind it, or the background stays and the screen moves. Given that you'll eventually be tracking lots of other items (baddies, treasure, etc) and their position on the map, I would suggest thinking about everything moving relative to the background (even though this may seem less intuitive at first). Let's call this the world coordinate. To render things to the screen we'll need to work out their screen coordinate.
Okay, so we now have two coordinates - the positions of the screen and the player. For consistency, let's make the player position use world coordinates too.
So how do we render this to the screen? Start by listing out the rules:
1. the background should always fill the screen (i.e. don't scroll so far that you can see outside of the background sprite)
2. the player should be centred on screen, except when that would violate #1
So the position of the screen is dependent on the player, but with some limits depending on where it is on the map. Let's consider the x coordinate (note this is untested):
# start by centring the screen on the player
screen_x = player_x - screen_width / 2
# limit the screen to within the bounds of the background
if screen_x < 0:
    screen_x = 0
if screen_x > (background_width - screen_width):
    screen_x = (background_width - screen_width)
You can now calculate the render position of the player (position on screen) by subtracting screen_x from player_x. The background render position is calculated the same way (but should result in a negative coordinate).
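Putting both axes and the render positions together, a minimal pygame-style sketch (the numbers and names are illustrative):

```python
def clamp_camera(player, screen_size, background_size):
    """Centre the camera axis on the player, clamped to the background."""
    cam = player - screen_size // 2
    return max(0, min(cam, background_size - screen_size))

# An 800x600 screen over a 2000x1200 background:
player_x, player_y = 1000, 100
cam_x = clamp_camera(player_x, 800, 2000)   # 600: player is centred
cam_y = clamp_camera(player_y, 600, 1200)   # 0: clamped at the top edge

# Render positions (pygame-style blits):
# screen.blit(background, (-cam_x, -cam_y))
# screen.blit(player_img, (player_x - cam_x, player_y - cam_y))
```

Note the background blit position is the negated camera coordinate, which matches the "negative coordinate" mentioned above.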

Position in box2d and sprite

I have two types of position, in cocos2d-x and box2d.
They are:
CCSprite* parent = CCSprite::create("parent.png");
parent->setPosition(ccp(100, 100));
b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.position.Set(self.position.x / PTM_RATIO,self.position.y / PTM_RATIO);
What is the difference between these two positions?
PTM_RATIO has the value 32 or 40.
What is PTM_RATIO and its values?
In box2d you use metres as the unit of length; in cocos2d-x you use pixels or points. PTM means pixels-to-metres. If PTM_RATIO is 32, it means that 32 pixels in cocos2d-x is 1 metre in box2d.
There are two coordinate systems at work here:
The box2d coordinate system, which is in meters. All bodies have a position in meters.
The screen coordinate system, which is in pixels. When you want to display a representation of the body (e.g. your sprite) on the screen, you have to use the sprite's setPosition method to place it in pixels.
The PTM ratio is the scale value of pixels/meters that you use to go between the two coordinate systems. Using a straight scale ratio puts the origin of the two coordinate systems right on top of each other. So the position on the screen is just a scale multiple of the position in the box2d world. This means that you won't see bodies with negative coordinates in general (unless part of their sprite sticks over the left edge of the screen).
When you go back and forth between the Box2d world and the screen world, you can use the PTM_RATIO to change the position in one to the position in the other.
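The conversion is just a multiply or divide by the ratio; in Python terms (for brevity, the helper names are illustrative):

```python
PTM_RATIO = 32.0  # 32 pixels == 1 metre (a common choice)

def to_pixels(metres):
    """box2d world -> screen coordinates."""
    return metres * PTM_RATIO

def to_metres(pixels):
    """screen -> box2d world coordinates."""
    return pixels / PTM_RATIO

# A sprite at x=100 pixels corresponds to a body at x=3.125 metres.
print(to_metres(100))  # 3.125
```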
Typically, you would set this up as follows:
Create your bodies in the box2d world with positions in metres.
Create your sprites in the screen system by setting each position based on the body position.
For example:
for (each body in the world) {
    b2Vec2 pos = body->GetPosition();
    sprite->setPosition(ccp(pos.x * PTM_RATIO, pos.y * PTM_RATIO));  // scale metres to pixels
}
Apply forces to your bodies and update the physics engine each update(dt) of your application. I suggest using a fixed time step (e.g. 1.0/30.0), but this is a different topic.
Each update(dt) call, update the position of your sprites using the same loop above.
When the user interacts with the screen, you have to convert the touch position to the world position (by dividing by the PTM_RATIO) if you want to find the nearest body, etc.
I have a blog post that talks about this in more detail and shows how to build a "viewport". This allows you to look at a small part of the box2d world and move around the view, scale the objects in it, etc. The post (and source code and video) are located here.
Was this helpful?
By convention, PTM_RATIO is 32 in most box2d examples.

Problem About 3D Projection?

I've been trying to make a 3D renderer (just for learning purposes), so I read this article:
I get confused with the part about e, which is the viewer's position relative to the display surface. I don't understand what that means or how I can calculate it, so please help and tell me the difference between it and the camera position.
Thanks in advance,
Omar Emad Eldin
If you'll forgive me opening with a copy and paste, e is the viewer's position relative to the display surface. So in the case of computer graphics it's the vector from a defined point on the screen (the centre of projection, most usefully) to the person looking at the screen (who we're pretending is a single point).
You normally can't calculate it, because even if you assume you have only one person looking at the screen, you probably don't know where they're sitting. Sometimes you can track eyes through a webcam or something like that, but usually you can't.
Once you have a point (x, y, z) relative to the camera, most libraries just do the following calculation to work out where to put the point in screen space:
x' = (half width of viewport) * x / z
y' = (half height of viewport) * y / z
This assumes the viewer is positioned centrally and one unit back from the screen, given that the position in camera space has already been scaled to apply some given horizontal and vertical field of view. I'm also taking the origin to be in the centre of the screen.
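That calculation is small enough to show directly; a sketch under the same assumptions (viewer centred, one unit back, FOV scaling already applied):

```python
def project(x, y, z, half_w, half_h):
    """Perspective-project a camera-space point to screen space.

    z is the distance in front of the camera; points at z <= 0
    are behind the viewer and should be clipped before this step.
    """
    sx = half_w * x / z
    sy = half_h * y / z
    return sx, sy

# A point two units away lands halfway between the centre and the edge.
print(project(1.0, 1.0, 2.0, 400, 300))  # (200.0, 150.0)
```

The division by z is what makes distant points crowd toward the centre, which is the whole perspective effect.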

Moving sprites between tiles in an Isometric world

I'm looking for information on how to move (and animate) 2D sprites across an isometric game world, with their movement animated smoothly as they travel from tile to tile, as opposed to having them jump from the confines of one tile to the confines of the next.
An example of this would be in the Transport Tycoon Game, where the trains and carriages are often half in one tile and half in the other.
Drawing the sprites in the right place isn't too difficult. The projection formulas are:
screen_x = sprite_x - sprite_y
screen_y = (sprite_x + sprite_y) / 2 + sprite_z
sprite_x and sprite_y are fixed-point values (or floating-point if you want). Usually, the precision of the fixed point is the number of pixels on a tile, so if your tile graphic was 32x16 (a projected 32x32 square) you would have 5 bits of precision, i.e. 1/32nd of a tile.
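Those two formulas can be wrapped up directly (a sketch; the function name is illustrative):

```python
def iso_project(sprite_x, sprite_y, sprite_z=0.0):
    """Project world (tile) coordinates to 2:1 isometric screen
    coordinates using the formulas above."""
    screen_x = sprite_x - sprite_y
    screen_y = (sprite_x + sprite_y) / 2 + sprite_z
    return screen_x, screen_y

# Moving one tile along +x shifts the sprite right and down on screen:
print(iso_project(1, 0))  # (1, 0.5)
print(iso_project(0, 1))  # (-1, 0.5)
```

Because the inputs are fractional, a sprite partway between two tiles projects to a point partway between their screen positions automatically.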
The really hard part is to sort the sprites into an order that renders correctly. If you use OpenGL for drawing, you can use a z-buffer to make this really easy. Using GDI, DirectX, etc., it is really hard. Transport Tycoon doesn't correctly render the sprites in all instances. The original Transport Tycoon had the most horrendous rendering engine you've ever seen. It implemented the three zoom levels as three instantiations of a massive MASM macro. TT was written entirely in assembler. I know, because I ported it to the Mac many years ago (and did a cool version for the PS1 dev kit as well; it needed 6Mb though).
P.S. One of the small bungalow graphics in the game was based on the house Chris Sawyer was living in at the time. We were tempted to add a Ferrari parked in the driveway for the Mac version as that was the car he bought with the royalties.
Look up how to do linear interpolation (it's a pretty simple formula). You can then use this to parameterise the transition on a single [0, 1] range. You then simply have a state in your sprites to store the facts:
That they are moving
Start and end points
Start and end times (or start time and duration)
and then each frame you can draw it in the correct position using an interpolation from the start point to the end point. Once you have exceeded the duration, the sprite gets updated to be not-moving and positioned at the end point/tile.
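The steps above can be sketched like this (the names are illustrative):

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def sprite_position(start, end, start_time, duration, now):
    """Where a moving sprite should be drawn at time `now`."""
    # Clamp t so the sprite parks exactly on the end tile when done.
    t = min(max((now - start_time) / duration, 0.0), 1.0)
    return (lerp(start[0], end[0], t), lerp(start[1], end[1], t))

# Halfway through a one-second move between two tiles:
print(sprite_position((0, 0), (32, 16), 0.0, 1.0, 0.5))  # (16.0, 8.0)
```

When t reaches 1.0 the sprite sits exactly on the destination tile, which is the moment to flip its state back to not-moving.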
Why are you thinking it'll jump from tile to tile? You can position your sprite at any x,y co-ordinate.
First create your background screen buffer and then place your sprites on top of it.