255: How To Avoid Small Choices And Design Your Application To Scale Big.
Description
How do you design your application so it scales well as it grows? Scaling needs to be verified early in the design to prevent costly mistakes that usually only appear later. You can scale in many ways. The number of users, the amount of data, and the size of the code are common dimensions. Avoid hard limits in the code and leave room to grow. And test with large amounts of data and large numbers of users even if they’re not real.
This episode describes some design decisions I made recently to let my game handle a large number of game objects.
If you’d like to improve your coding skills, then browse the recommended books and resources at the Resources page. It collects all my favorite books and resources to help you create better software designs.
Listen to the episode for more details or read the full transcript below.
Transcript
By the time you notice that your program will not scale, it might be too late for simple fixes.
There’s a balance here that will come with experience. But we usually only learn from experience when we fail. What kinds of failures am I talking about, and how can you notice them before they become bigger problems?
When we’re in school learning how to program, or reading a book, or even watching an online video, the goal is usually to teach some specific concept that you can use later. There are lots of little things you should be considering, but these would only get in the way of learning the main idea. So they’re left out of the topic.
There are just too many opportunities to get lost in the details when learning something new. The problems you’ll be learning about and solving will be small and very specific to the idea being taught.
Let’s say that you’re first learning how to count. It might be okay to use your fingers at first. The problems you’ll be working with are designed for this. Things like 2 plus 3 are easy to visualize with fingers. Even some subtraction can be done with fingers.
But try to scale the problems so they’re bigger, with sums in the thousands or millions, and you run into difficulty.
The same thing happens in programming. If you need to keep track of items, then you might want to use a vector. It’s easy to push new items onto a vector. And you have simple ways to find things and remove them from vectors.
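Here’s a quick sketch of what those operations look like in C++. The int items are just placeholders for illustration:

```cpp
#include <algorithm>
#include <vector>

int main()
{
    std::vector<int> items;

    // Pushing new items onto the end is simple.
    items.push_back(2);
    items.push_back(3);

    // Finding an item means scanning until a match turns up.
    auto found = std::find(items.begin(), items.end(), 3);

    // Removing an item is easy once you have its position.
    if (found != items.end())
    {
        items.erase(found);
    }
}
```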
But you need to know when to use a vector in any type of real software application. There are times when it’s the absolute best choice. And I’m not just talking about when the sizes are small.
It can scale very well to large solutions when used properly.
Because it’s so simple, it’s like using your fingers. So a lot of books and classroom lectures and videos will use it when explaining another topic. I have a book right now that I’m using to get ideas for the game library that I’m working on.
Finding something in an unsorted vector means that you have to examine each item one by one to see if it’s the one you want. So this book came up with the idea to use a bitmask first just to know if a particular item exists in the vector. A bitmask lets you quickly test if a binary bit is set to one and if so, then use that as a signal that the item you want is somewhere in the vector. This helps to avoid searching through the whole vector only to end up empty-handed. It’s better to know quickly and avoid the search if it’s not there.
What happens if the bitmask says the item should exist in the vector? Then the code will have to start visiting each item to see if it matches.
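Here’s a minimal sketch of this overall design, assuming item ids between 0 and 63 tracked in a 64-bit mask. The GameObject type and the add and find names are my own placeholders for illustration, not the book’s actual code:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct GameObject
{
    int id;
};

// One bit per possible item id. A set bit means the item
// might be in the vector, so a full search is worthwhile.
std::uint64_t mask = 0;
std::vector<GameObject> objects;

void add(GameObject obj)
{
    mask |= std::uint64_t{1} << obj.id;  // assumes id is 0..63
    objects.push_back(obj);
}

GameObject* find(int id)
{
    // Quick rejection: if the bit is clear, the item can't be
    // in the vector, so skip the search entirely.
    if ((mask & (std::uint64_t{1} << id)) == 0)
    {
        return nullptr;
    }

    // The bit is set, so visit each item until one matches.
    auto it = std::find_if(objects.begin(), objects.end(),
        [id](GameObject const& obj) { return obj.id == id; });
    return it != objects.end() ? &*it : nullptr;
}

int main()
{
    add(GameObject{5});
    GameObject* hit = find(5);   // bit is set, search runs and succeeds
    GameObject* miss = find(9);  // bit is clear, search is skipped
    return hit != nullptr && miss == nullptr ? 0 : 1;
}
```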
Beyond the time needed to check each item, there’s another scalability problem with this design.
Because the book doesn’t use an expandable bitmask, the design has a limited number of items it can support. That limit is either 32 or 64 items, depending on whether the bitmask is stored in a 32-bit or 64-bit integer.
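Here’s a small sketch of why that limit is hard, assuming the mask is a 64-bit integer:

```cpp
#include <cstdint>

bool setBit(std::uint64_t& mask, int id)
{
    // A 64-bit mask only has bits for ids 0 through 63.
    // Shifting by 64 or more is undefined behavior in C++,
    // so ids past the limit have to be rejected outright.
    if (id < 0 || id >= 64)
    {
        return false;  // the design has no room for this item
    }
    mask |= std::uint64_t{1} << id;
    return true;
}
```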
To me, this is a more serious limitation that will prevent an application from growing. A vector can always be swapped for a different data structure. But the use of this bitmask has a broader