Minor updates and code changes occur every day. Only significant or noteworthy updates are shown here. Updates shown with a gold background are (or were at the time) only available to Advanced HOPS members.



Update 1129
16 January 2025
Thank you to everyone for your patience during this morning's maintenance window. We're pleased to say that the work was completed and we're now hosting HOPS as a container cluster.

There are now multiple redundant copies of HOPS running invisibly in the background, all ready to handle your requests at the same time. A huge amount of work has taken place behind the scenes over the past six months to move the HOPS infrastructure over to a container platform, and our thanks go to James Walker at Heron Web for undertaking this upgrade with us.

Development on HOPS started over 15 years ago, and at that time the traditional way to host a website was to rent a server from a hosting company. Over the years we've moved HOPS to bigger and faster servers, but in 2024 we started to approach the limits of what's possible with that type of hosting. At the same time, a strong increase in Retail Systems customers last year made it clear that reliability and response time needed to be our next big project.

A more modern approach to website hosting is to break the system down into smaller self-contained parts - containers - and run multiple copies of each. Each part of the system has a file, like a cooking recipe, that describes how to run it as a container. Once running, the containers themselves are short-lived and don't store any data, so they can be added to and removed from the overall system very quickly and reliably. We've been using this approach to host our frontend user interfaces - New HOPS, Time Register, Point of Sale and the retail websites - for over two years, so we're confident that it meets the demands of those services.
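For the technically curious, here's a rough sketch in Python of what that statelessness looks like in practice. It's purely illustrative - the service and database helper names are made up and it isn't HOPS code - but it shows the key point: the handler keeps nothing inside its own container, so any copy in the cluster can answer any request.

```python
# Illustrative sketch only - not HOPS code. It shows what "short-lived and
# doesn't store any data" means in practice: the handler keeps nothing on
# the container itself, so any copy in the cluster can answer any request.
from http.server import BaseHTTPRequestHandler, HTTPServer

def load_duty_list_from_shared_db(date):
    # Hypothetical placeholder: a real service would query the central
    # database that every container shares.
    return ('{"date": "%s", "duties": []}' % date).encode()

class DutyListHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request is served from shared data, never from files or
        # memory inside this particular container.
        body = load_duty_list_from_shared_db("2025-01-16")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each container runs one copy of this server; the load balancer
    # decides which copy a given visitor reaches.
    HTTPServer(("0.0.0.0", 8080), DutyListHandler).serve_forever()
```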

When users visit a HOPS page they are now invisibly distributed between the containers, so that the load is balanced. If one of the containers fails it is automatically removed, and a new container is instantly added to the cluster to replace it. If we expect, or experience, a period of high use (e.g. a Steam Gala) we can easily add more containers to the pool to handle the extra load. All containers share the same HOPS database (which is itself balanced across multiple copies), so there's no possibility of the containers going out of sync with each other or serving stale data.
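Again purely as an illustration - in the real cluster the balancing, failover and scaling are handled by the container platform itself - the toy Python sketch below captures the idea: a pool that rotates requests across containers, replaces any that fail, and grows when a busy period is expected.

```python
# Toy model of the balancing and failover described above - illustrative
# only; the real cluster uses the container platform's own tooling.
class ContainerPool:
    def __init__(self, start_container, initial_size=3):
        # start_container is whatever launches a fresh copy from the recipe.
        self.start_container = start_container
        self.containers = [start_container() for _ in range(initial_size)]

    def pick(self):
        # Simple round robin: rotate the pool so load is spread evenly.
        container = self.containers.pop(0)
        self.containers.append(container)
        return container

    def replace_failed(self, failed):
        # A failed container is removed and a new one is started from the
        # same recipe, so the pool returns to full strength straight away.
        self.containers.remove(failed)
        self.containers.append(self.start_container())

    def scale_up(self, extra):
        # Ahead of a busy period (e.g. a Steam Gala) extra copies are added.
        for _ in range(extra):
            self.containers.append(self.start_container())
```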

Our software development team can also use these recipe files to run an exact copy of HOPS on their own computers far more reliably than before. Developers can test upgrades more thoroughly, knowing that the next time the containers are recycled HOPS will be running from exactly the same recipe they tested against, which reduces the risk associated with future updates and changes.

This has been a huge migration (as big as, if not bigger than, "New HOPS" a few years ago), so we'd appreciate your patience over the next week or two while things settle down in their new home. Please report any problems to your HOPS Admin, and HOPS Admins can open a Support Ticket in the usual place.