Friday, 10 October 2025

XVE Game Engine: Thinking about AOE Damage Effects

I'm back on my home game engine and I need an area damage effect, which can apply to all the members of a team/party.... The trouble?  How to synchronize this over a network connection.

First of all, I know for certain that the server running my game world knows the center of the effect, and it knows the radius of the effect.  And it ticks the effect application against each player whether or not their client can visually see the effect - it appears in their combat log text immediately.

So the mechanics are there; the clients, however, all run at different rates, in different locations worldwide, with different latencies to the server's source of truth.... And so I now need to think about how I am going to tackle the issue of each player seeing the effect.

Simplicity first: the effect is just going to be represented by a wireframe spheroid; it'll become a smooth-shaded, sparkle-dripping effect later.

This quiets my engine-engineer smooth brain somewhat, and now I just need to synchronize the expansion from the center to the maximal radius of the effect. This will turn up at the clients from the server at some time after the effect is registered as affecting the players on the server... This may be unfair; however, there is reason behind it in the design - the player will know the effect is going to be cast as there is pre-warning from the caster... They need to learn that and move before they see the effect, basically.... That is, they need to be smart.... I know, I know, this is a cheap answer, but I am all about cheap answers to problems today - and in my gameplay loop, seeing the caster "spit" this effect and moving before it expands into a cloud of hell... is pretty rewarding.

So I have a time point, on all clients they know when the effect begins, they know the maximal extent and they know the center of it....

What I am thinking about now is the client side only, so without any network traffic: an entity definition for the effect which the client has pre-loaded into some dictionary of effects to execute. This contains a rate per millisecond from the start time, and the effect system has to linearly interpolate along that line.
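
Concretely, I'm picturing something like this minimal sketch (the names and numbers are all mine, purely for illustration, not a settled engine API):

    #include <algorithm>
    #include <cstdint>
    #include <unordered_map>

    // A pre-loaded, client-side definition of an expanding AOE effect.
    // Nothing here travels over the network; only the start event does.
    struct AoeEffectDef
    {
        float maxRadius;    // maximal extent, known up front
        float radiusPerMs;  // expansion rate per millisecond
    };

    // Given the definition and the milliseconds elapsed since the
    // server-stamped start time, linearly interpolate the radius.
    float CurrentRadius(const AoeEffectDef& def, std::uint64_t elapsedMs)
    {
        const float r = def.radiusPerMs * static_cast<float>(elapsedMs);
        return std::min(r, def.maxRadius);
    }

    // The dictionary of effects the client has pre-loaded, keyed by ID.
    std::unordered_map<std::uint32_t, AoeEffectDef> g_effectDefs;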

I am thinking about designing some form of easing curve authoring for the client effect system - and I'll likely reuse this in the animation system - whereby I can try different easing curves.


The system interpolating along the line could be more than a lerp too; again, that'll be down the line.
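
To make the easing idea concrete, here's a hedged sketch of what swapping curves in might look like (the curve shapes are standard; which ones I actually author is still to be decided):

    #include <cmath>

    // Easing curves over normalised time t in [0, 1].
    float EaseLinear(float t)  { return t; }
    float EaseOutQuad(float t) { return 1.0f - (1.0f - t) * (1.0f - t); }
    float EaseOutExpo(float t) { return t >= 1.0f ? 1.0f : 1.0f - std::pow(2.0f, -10.0f * t); }

    // Radius at normalised time t: a lerp from 0 to maxRadius,
    // shaped by whichever curve the effect definition names.
    float EasedRadius(float maxRadius, float t, float (*ease)(float))
    {
        return maxRadius * ease(t);
    }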

Replaying, reversing or otherwise manipulating such a defined effect is going to be quite interesting, as reversing the delta will allow me to contract it again.... Though timeline ordering is to be forwards-chronological only for my gameplay, that may be a feature I want to play with in future.

What's the problem then?  Yeah, it sounds like I have a handle on what I want, except.... I've assumed all the way through my thinking that the effect is fixed, known and defined up front.... The problem?

I want to scale the effect with the power of the caster, and this is not known up front. Sure, I can just add a float and multiply by 0.5 for half power, or anywhere along that scale; heck, I could even scale the enemy's power by some easing function as well and combine the power function along that line with the rate of the effect.... So there are answers to be had.
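
As a sketch of that thought (the 0..1 power value and the second easing function are assumptions of mine, not settled design):

    // Scale the radius by caster power, itself shaped by its own curve.
    // 'power' is a hypothetical 0..1 value derived from the caster's stats.
    float ScaledRadius(float maxRadius, float t, float power,
                       float (*ease)(float), float (*powerEase)(float))
    {
        return maxRadius * powerEase(power) * ease(t);
    }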

My problem is randomness: when it's random, things look wrong, they don't feel right when playing them, and the rewarding feeling I had from the fixed-function gameplay doesn't turn up... It's in some sort of metaphysical uncanny valley which I can't quite explain; it just doesn't feel right.

I will have to play about some I feel.... 

Thursday, 9 October 2025

A Thought On Poles & Swingletrees

I was intrigued recently by a video from LindyBeige about Roman War Waggons, during which he brought up that some folks argue about how a horse might have been tacked to a wagon in that era.

Whereby some etchings show only a single "pole" attaching a wagon to oxen or horses.

I looked at the image and wondered if perhaps it was a full-length trace, which would be on the wagon frame, but he was totally right about the perspective and the image clearly being fanciful.

Today, therefore, I was having a think about whether swingletrees would have been in use; they certainly are in ploughs and ox-drawn equipment from the medieval period - I've read and seen etchings which come from contemporary sources.

My driving position has a single articulated pole, but this is not for the driving power; that is just for direction. The swingletrees, to the horse's traces, are the power.... so there's a very intricate interplay between all of them to keep things in balance....

And to be honest.... I reckon something of this form would have existed just a few years after people started using horses for draught, let alone by the turn of the common era, a few millennia later.



Tuesday, 7 October 2025

Over the Fence.... And Far Away....

No, I'm not talking about horses this time.... Let's talk engineering: what is a fence?  A memory fence, or barrier, is used to order reads and writes around certain operations, most commonly atomics.

This ordering ensures, as a minimum, that the value is communicated correctly from the register in the processor to all the running cores/threads and the cache within the chip, and may even lead to a full flush to main memory (though that takes far longer in cycles).

These can take tens to hundreds of cycles to leap the value over the barrier.

In C++ the most common two, or certainly the two I encounter most often, are fetch_add and fetch_sub, used to control everything from the shared count on a resource to the control gating on fixed-size "high performance" containers.
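
For the avoidance of doubt, I mean usage along these lines - a minimal sketch of a shared reference count, with my own names:

    #include <atomic>
    #include <cstdint>

    // A shared reference count guarded purely by atomic operations.
    std::atomic<std::uint64_t> refCount { 0 };

    void AddRef()
    {
        // The increment needs no ordering beyond atomicity itself.
        refCount.fetch_add(1, std::memory_order_relaxed);
    }

    void Release()
    {
        // fetch_sub returns the previous value, so 1 means we were the
        // last owner; acquire-release ordering makes prior writes visible.
        if (refCount.fetch_sub(1, std::memory_order_acq_rel) == 1)
        {
            // last reference gone; tear the resource down here
        }
    }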

And therein lies the rub: these operations cost a lot of cycles just to increment or decrement a counter in many of my use cases, so why use them?

Well, that barrier within the chip, out to the memory, is very expensive if we compare it simply with the very cheap increment or decrement of a register itself: the value in a register on the chip can change in a single instruction. Sure, it took other instructions to load the register and it'll take yet more to store the result off, just as it would with the atomic; but beyond that you have no overhead in comparison with the atomic....

Until... Until you try to synchronize that flat increment or decrement. Sure, that code is going to be far faster; however, it's not thread safe, not at all, where the atomic already is (when ordered correctly)...

In order to protect a flat operation one therefore has to wrap a separate lock, or mutex, around it, which is far, far more costly than the atomic operation.  This difference is called the "contention cost", and the contention cost of an atomic is simply in the number of steps. Let's look at code:

For the atomic addition, the CPU itself will execute something like:

lock xadd [or similar]

This itself is a single instruction; it may take multiple CPU cycles to complete, but it is a single instruction.  It ensures unique ownership of the cache line (usually 64 bytes) within which the variable resides, which means that if you perform operations anywhere else in that same cache line you will be making optimal use of it, as the CPU can perform all the atomic updates in that 64-byte block without having to fetch another.  This is really useful when there are a few cores (2-8 on average) accessing an area of memory, and it even holds up when scaling out to more cores.
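
The flip side of that cache-line ownership is false sharing: two unrelated hot counters sat on the same line will fight each other for it. A sketch of padding counters onto their own lines (the interference size below is the portable stand-in for that usual 64 bytes):

    #include <atomic>
    #include <cstdint>
    #include <new>

    // Pad each hot counter to its own cache line so threads hammering
    // different counters never contend for the same line.
    struct alignas(std::hardware_destructive_interference_size) PaddedCounter
    {
        std::atomic<std::uint64_t> value { 0 };
    };

    PaddedCounter perThreadCounters[8]; // e.g. one per worker thread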

A mutex however, has to be controlled wholly separately from the increment, so we may end up with C++ such as this:

    #include <cstdint>
    #include <mutex>

    std::mutex lock;
    uint64_t count { 0 };

    void Increment()
    {
        // The guard acquires the mutex here and releases it at scope exit.
        std::lock_guard<std::mutex> lockGuard { lock };
        ++count;
    }

The execution here has to acquire the mutex, and internally that acquisition is itself an atomic operation; if the path is uncontended, then the atomic operation underlying the mutex is the only added cost.  However, and this is a HUGE however, if there is contention - someone else already has the lock - then this lock_guard has to wait, spinning or sleeping... And it's the contention, the other thread holding the mutex, which adds the cost.

So you're essentially gambling on whether the value is contended before you take the lock or not, and in both cases you take on the cost of at least one atomic operation; so for my codebase and its use across sub-32-core machines, an atomic is much more efficient in almost all my use cases.
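
Put side by side - again just a sketch with my own names - the trade looks like this; the atomic version always costs one atomic operation, the mutex version costs one atomic operation plus whatever the contention adds:

    #include <atomic>
    #include <cstdint>
    #include <mutex>

    std::atomic<std::uint64_t> atomicCount { 0 };

    std::mutex countLock;
    std::uint64_t guardedCount { 0 };

    void BumpAtomic()
    {
        atomicCount.fetch_add(1, std::memory_order_relaxed); // one atomic op, always
    }

    void BumpGuarded()
    {
        std::lock_guard<std::mutex> g { countLock }; // atomic acquire + possible wait
        ++guardedCount;                              // the cheap part
    }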

A mutex, however, is far more useful when protecting more than a single register: when protecting a whole block of memory, a shared physical resource (like a disk) or just a complex structure, you can only really use a mutex around it.

All this sort of eluded me earlier this evening. I was in the middle of a technical conversation and I brought up the atomics backing a shared_ptr in C++, and immediately just sort of lost it; my memory drifted far, far away and I have to admit to waffling some.

I even forgot about weak_ptr being derived from a shared_ptr, and its use to "lock" a new copy of the shared_ptr, and so pass ownership via weak pointers.
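
That's this pattern, for the record - a minimal sketch:

    #include <iostream>
    #include <memory>

    void Observe(const std::weak_ptr<int>& weak)
    {
        // lock() atomically yields a new shared_ptr if the object is
        // still alive, or an empty one if it has already been destroyed.
        if (std::shared_ptr<int> strong = weak.lock())
            std::cout << "still alive: " << *strong << '\n';
        else
            std::cout << "already destroyed\n";
    }

    int main()
    {
        auto owner = std::make_shared<int>(42);
        std::weak_ptr<int> watcher { owner };
        Observe(watcher); // still alive: 42
        owner.reset();
        Observe(watcher); // already destroyed
    }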

But it came from a very specific conversation about Unreal Engine, about TSharedPtr... Not a structure I myself have used, and for the life of me I could not think why not; I just knew not to, having been told...

And of course, here I sit a few hours later and I know why: it's not thread safe... TSharedPtr in Unreal is not thread safe in its default mode, and why is it not?  Well, because it does not protect its internal reference count with an atomic; no, it's just a flat increment and decrement of a count integer "for performance purposes".

So sure, if you're inside one Unreal system, on one thread, then yeah, you can use the TSharedPtr, but its utility is much reduced to my eye, and you would perhaps want to look at other ways to hold your resources in that thread, even in thread-local storage rather than in the engine heap.

The moment that TSharedPtr crosses a thread barrier, you're far, far away from thread safe.

So what do you use a TSharedPtr for?  The documentation says "To avoid heap allocation for a control block where possible"... Yet it contains an explicit reference count, which is in a control block, and underlying it is a flat operator new with only default deletion via the flat operator delete.... So my non-Unreal-expert brain says, "Why use it at all?"

Hence when asked earlier today my memory was over the fence and far far away.... Of course now, it's all returned and... yeah I just sat through two hours of TV mulling this over.... and had to come write something down before I went a little bit odder than usual.

Tomorrow let me regale you with my story about forgetting how to load a crash dump and the symbol files, and explain myself properly, despite having done that particular duty about a hundred thousand times in my life.  Hey ho, a technical conversation in which I fouled up; one is only human.

Saturday, 27 September 2025

More on Refactoring & Software Teams

I've had one of those moments where I have posted and just feel the need to elaborate on the topic, I enjoy that feeling, so here goes.

I posted about prototyping, and that is such a loaded word and indeed quite a broad subject; I'm not delving into anything specific in that field. However, I am going to talk about why I enjoy revisiting and refactoring, and the benefits I have found in those efforts.

First, refactoring: itself a simple enough principle. You can both simplify something and make it more readable, more maintainable; you can also refactor to take advantage of new advances from other technologies - perhaps a new library or framework which achieves the same effect as the code written by your (perhaps) smaller team - and in my opinion, going with a library written and peer-reviewed in use by hundreds, thousands or even millions of other users is a real boon to your assurance of its use, over your own individual effort.

[Yes, writing your own can also be good].

Second, revisiting: this is really the meat of the prior post. Going back to your own code, or code in another part of a large system; writing it your way; updating the coding standard; conforming to emerging or simply new best practices.  Doing this is really quite interesting, and through the magic of software you can do it side-by-side.

You can refactor the function "foo" with your own "bar", then simply swap the two function names, and voila, the original system process flow now uses your new code; you can test it, try it and, crucially, roll back if you need to.
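
In C++ terms, a trivial sketch of that swap (foo and bar are just the placeholder names from above):

    // The original implementation, left in place and untouched.
    int foo_original(int x)
    {
        return x * 2;
    }

    // The reworked version, written side-by-side with the original.
    int bar(int x)
    {
        return x + x; // same behaviour, new implementation
    }

    // Callers keep calling foo; point it at the new code, and roll back
    // by pointing it at the original again if anything breaks.
    int foo(int x)
    {
        return bar(x); // or: return foo_original(x);
    }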

This is a great tool and the real reason I wanted to highlight the practice.

It isn't just limited to individual benefits; it can benefit the team members around you, both by having the practice demonstrated to them and by acting as a catalyst to encourage them to engage with the code more widely. Too often I see engineers rabbit-hole into a single field, or become the de facto owners of only one vertical slice of a code base.

Such occurrences can lead to friction. What happens if they are absent through, say, illness or holiday?  A problem occurring in their field becomes very important, and the pressure ramps up on them, on the team and ultimately the project.  And there's no pressure valve to relieve this; you rely solely, heavily, on that one person.  What happens if they do not like this work environment?  You are unable to support them!  They feel they are trapped!  There are so many negative effects from the sole-engineer/owner model that, again, I am beginning to touch on a whole other field of the software engineering puzzle.

So refactoring efforts, to me, become a way to familiarize multiple engineers with the general planform of a project, large or small; it helps each engineer support every peer in the group, and they find common things to discuss or collaborate on.

To me, working through a codebase, as a group, in cooperation is really important.

And it starts with one person in front of one screen tinkering and discovering. 

Friday, 26 September 2025

Software: A Rant About Change for Change's Sake

In software engineering I am one of those people who likes to embrace change; I will, in my own projects and even in professional situations (time permitting), stand up a new version of existing functionality in a side-by-side manner to try out new things, seek better solutions or performance, or simply to understand how someone else's solution solved the problem at hand.

This mentality broadly sits under the "Prototyping" method of software engineering; however, prototyping itself often says you should create a prototype in a wholly other language or platform than your target. A little like when planning to make a game in a brand-new custom game engine: you might prototype in an existing engine first, benefiting both from your own engineering expanding your eventual product and from building confidence, plus a marker post for the performance and content to which you can work.

The same is then true when I just want to elaborate on a single target piece of code and rework it.

I say all this because I like change.

What I cannot abide, however, is change simply for change's sake: changing or reengineering something for no purpose.

The biggest villain in this space for me, sadly, has to be Microsoft with the Windows operating system.

The only reason I even still run Windows is to test games I'm working on (not even to run those games, in most cases).

And today they've drawn my ire by reengineering the Lock Screen, for no purpose to me, nor benefit to any user.

You could just Windows-Key-and-L your way out to lunch, or step away securely from your desk; it's a natural reaction for me to lock the screen, even when I'm working at home!  So thoroughly was I taught about security, through hard-learned lessons and practical jokes.

So it irks me massively to return to my machine just now and find it has installed an update, and now, when I come to wake the locked machine, it sits.... and sits... and does nothing... and nothing happens.... and you wait....

To the point I believed the machine was locked up or crashed.

When really it's opening four widgets on the lock screen.... Four..... Inane news articles which do not interest me, junk adverts and even the weather app I must have uninstalled three times on this machine, yet it returns constantly.

The delay?  Yeah, it was loading whatever framework it needed to show these embedded widgets.... Some JavaScript framework taking a gig of RAM, no doubt, which has taken the lock screen from instant, consistent and functional to a dismal mess I am going to have to disable for my own sanity.

Microsoft, just NO, stop doing this.

Why the widgets?  And then, when you remove them - after trying to figure out what's wrong, and even with the value set to "NO, do not show me these widgets" - it still adds another app to the list of things it can show; it's so insidious, not to mention slow.  And when you finally do disable these needless widgets?..... Oh, it's still massively slow and doesn't work as it did before.

Yeah, you should press any key to wake the locked screen and get a log-in prompt... now only Escape seems to give me a log-in screen.... I have no idea why that change was made; I presume whoever was tasked with it just likes, or was themselves used to, the Escape key. But it's the many tiny little changes, the gaslighting of "this used to work that way" only to find it changed - without any visual reconfiguration, it just works differently now - that beggar belief.

Wednesday, 17 September 2025

XVE: Making Meteors!!!

I've had a little bit of a roller-coaster few weeks; I would recommend anyone feeling life take them and rush them to try and find a minute for themselves, take a breath and slowly exhale.


Wednesday, 6 August 2025

Why Remote Work Just Works (for Me)

When I started working as a programmer, my days began with a ritual that felt entirely normal at the time: over an hour of inbound commuting into the city, and then another long outbound journey home. Time, energy, money — all drained in the process. It was the cost of doing business, or so I thought.

Then the pandemic hit, and everything changed. Practically overnight, that daily routine vanished. The entire company transitioned to remote work in just a few days. Luckily, we had already laid some of the groundwork — tools, systems, and workflows that supported remote access. All we had to do was scale up.

And it worked. Customers experienced very little disruption, and internally, we barely missed a beat.

Now, years later, we’re still working remotely. And — here’s the thing — it still works.

Not just for the company, but for me. Personally. Deeply. In ways I didn’t expect.

  

More Time, More Focus, Less Waste

The first and most obvious benefit? I got my time back. No more two-hour round trips, no more standing on packed trains or sitting in traffic. That reclaimed time went straight back into my life — and into my work.

I’m more productive now. I’m more focused. I’m in control of my time, my energy, my attention. Sure, life shows up — a doorbell, a neighbour, or an unexpected distraction — but the tradeoff is still massively in my favour. I’ve spent time building out a dedicated workspace at home, optimized for deep concentration and comfort. It’s not a makeshift setup at the kitchen table. It’s mine, and it’s built for what I do.

With that setup, and without the daily grind of commuting, I find I spend more time at my desk, more time on task, and the quality of that time is better. It's not just about more hours; it's about more effective hours. My brain arrives to work fresh instead of depleted.

 

The Return-to-Office Push: A Puzzle

Despite all of this, there’s a message echoing out there in the corporate world: return to the office. The tone ranges from gentle encouragement to stern mandates. But I keep asking myself — why?

Why bring people back into expensive office buildings? Why shoulder the cost of maintaining spaces built for humans — with their endless needs for coffee, heating, lighting, safety drills, and ergonomic chairs — when the alternative is already working?

If a company needs physical infrastructure, great. Build a tech hub. Keep your servers somewhere secure, your dev environments humming. Machines don’t need water coolers or office parties. But humans — we’ve figured out how to work remotely, and for many of us, it’s been a genuine upgrade.


The Uncomfortable Truths?

Maybe not everyone shares this experience. Maybe not every job translates well to remote work. Maybe some people don’t have a dedicated space at home, or they’re working at the kitchen counter while the family or flatmates buzz around. Maybe their productivity really has dropped.

And maybe, just maybe, some of the voices calling us back to the office are those for whom remote work didn’t feel good — or didn’t look productive from their side of the camera. Managers who are used to seeing bums on seats might feel unease when they can’t “see” work happening.

I get it. It’s hard to manage outcomes instead of hours. It’s hard to trust that people are working when you can’t walk by their desk. But is that really a reason to ignore all the gains?

  

What Does the Data Say?

I’d love to dive into studies on this — real data about productivity in remote vs. office environments. But I want more than just headline numbers. I want to know:

  • What kind of work were people doing?

  • Did they have a dedicated workspace at home?

  • Were they experienced at remote work, or thrust into it overnight?

Because I believe my personal productivity boost comes not just from being home, but from investing in a space that lets me focus, and in habits that support remote productivity. Without that, maybe the experience would be different.

 

It’s Not One-Size-Fits-All

This isn’t a blanket statement that everyone should work remotely, or that every company should shut its offices. But it is a reminder that — for many of us — the shift to remote wasn’t a compromise. It was an evolution.

We cut out inefficiencies, reduced stress, and created more sustainable workdays. And that’s not nothing.

So, when I hear the call to return to the office, I pause. Not out of resistance, but out of honest curiosity: What are we returning for? Is it about culture? Control? Collaboration?

Because if it’s about productivity — well, for some of us, remote work already won that argument.

 

My Conclusion

Remote work isn’t perfect. But it’s real, and it’s working. At least for me — and I suspect for many others too.

Maybe it’s time to stop viewing remote work as a temporary measure or a compromise, and start treating it as what it has proven to be: a legitimate, powerful, and in many cases superior way to work.

Let’s be thoughtful. Let’s look at the data. Let’s listen to the wide variety of experiences out there.

But let’s not forget: commuting two hours a day wasn’t normal. It was just what we got used to.