# What to think during NALSD interview

There are a lot of posts about what a typical coding interview at Google looks like. But, while not as widely described and discussed, there is also quite often a system design interview. For an SRE position it’s NALSD: non-abstract large system design. The key difference between SWE and SRE interviews lies in these two letters: NA.

So, what is the difference? How to be prepared for this interview? Let’s be non-abstract, and use an example. To be more non-abstract, let’s take something from the material world, such that you won’t be asked the exact same thing at the real interview (at least, not at the Google interview) :)

So, let’s design a public library system. For paper books, like the ones you have seen all around. The whole text below was written all at once within around one hour, to roughly show you the areas that you should be able to cover / touch during the interview. Please excuse some disorder, that’s how I think (therefore I am).


## NALSD interview: design a public library system.

First of all, we gather load characteristics, either via questions to the interviewer, or by making a reasonable guess (without forgetting to state it aloud). Since the interview concerns scalable systems, we expect a city with a large population (say, 1M+). We have a few options: one huge building, or local libraries plus storage. I think the latter is more reasonable, as it also allows us to expand the system to multiple cities.

Now let’s focus on the system for a single city; we should be able to apply the same design (with minor corrections) again to expand it to the country level. So, we have a city with a 1M+ population. To make the calculations easy, let’s round to 1M possible readers. Every reader wants to read something independently of every other reader, at arbitrary moments in time, so we can model the stream of readers as a Poisson process. Doing that math properly is a bit difficult within the interview timeframe, so let’s simplify one more time and just say that about 1% of the possible readers come to take a book every day. So presume 10000 book requests per day.
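The demand estimate is simple enough to sanity-check in a couple of lines (the 1% daily fraction is the guess stated above, not a measured value):

```python
# Daily demand estimate for the city library network.
readers = 1_000_000     # potential readers in the city (rounded)
daily_fraction = 0.01   # assumption: ~1% of them show up on any given day
daily_requests = int(readers * daily_fraction)
print(daily_requests)   # 10000 book requests per day
```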

So, our task now becomes: provide the ability to hand out 10000 books a day (which also means we need to be able to take these 10000 books back someday later) in a 1M+ city. This estimation shows that a single-building solution won’t stand: to provide books for 1M+ people, our library should be reachable within a reasonable amount of time, otherwise the desire to take a book will time out in a traffic jam. So, let’s try to guess a reasonable size for the local libraries we have to build. A fully correct estimation would involve population density, reachability latencies, etc., but since this won’t affect the overall design, we can again simplify by assuming a uniform distribution of similar units. We still don’t know how big the units should be; for instance, we could spread 10000 libraries with 1 book each across the city, but that obviously won’t work. Going one level lower.

A single local library unit. For it to be useful, the majority of visitors should be able to find the book they want. That means we need to track usage and forecast demand for the most popular requests, in order to be ready to serve them. Additionally, we should have a wide range of less popular items. My rough guess is a library with under 1000 titles, with dozens of copies of the most popular ones and single copies for the tail. The average time to read a book in our library ranges from 3 days to 2 weeks (of course, the real time depends on the book itself, which in a real case would require additional analysis); let’s estimate 1 week per book. That means a book that was handed out comes back in 1 week, so we should have a 1-week supply of the top books (after a week they get replenished from the returns).

Suppose an average blow-up factor of 6 copies per title, which leads us to a unit storage capacity of 6000 books. Less storage would mean a shortage of the most popular titles (well, smaller units could still be useful, like a small mall kiosk with “Twilight”, but let’s keep those outside of our design for now).

In a steady state, daily returns and borrowings roughly balance each other (with some dispersion), but we should also plan for peak returns (which can follow external sync events: holidays, trends, etc.). The correct way is to build and run a statistical model, but for now let’s just keep ⅓ as a buffer. That means we should keep 6000 books ready for hand-out, plus space for 2000 returns.
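The shelf-space arithmetic from the last two paragraphs, as a sketch (the title count and blow-up factor are the guesses made above):

```python
titles = 1000                   # guessed distinct titles per unit
blowup = 6                      # guessed average copies per title
in_stock = titles * blowup      # 6000 books ready for hand-out
return_buffer = in_stock // 3   # ~1/3 headroom for peak returns
capacity = in_stock + return_buffer
print(capacity)                 # 8000 books of storage per unit
```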

To sum up: we need a library unit with storage for 8000 books. Restocking it every day would be costly, so we should expect one stock to last a week or two. Two weeks, 6000 books, with a possible returns skew: that’s around 300 books a day. For the opening day we can fill the whole storage (8000 books) to seed the initial 2000 books in turnover. 2000 over 3 days is roughly 650–700 a day; with a skew buffer, call it 800 books a day.

Let’s estimate the throughput and limitations of our storage. 1 book is around 2 linear cm of shelf, so 8000 books = 160 meters. We can fold that vertically into 4 shelves, giving 40 linear meters; splitting it into ~5-meter racks gives 8 racks of 4 shelves, each shelf 5 meters long. One librarian (or robo-librarian) should be able to work with two racks. Extracting a single book takes: 5 seconds to walk to the shelves, 5 seconds to grab/put the book in its place, 5 seconds to walk back (15 seconds total). 4 librarians give us a peak throughput of about 15 books per minute, i.e. handling at most around 900 books per hour.
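The same shelf and throughput arithmetic in code; note the exact figure is 16 books/minute, which the estimate above rounds down to 15 (≈900/hour):

```python
books = 8000
cm_per_book = 2                            # assumed shelf width per book
shelf_meters = books * cm_per_book / 100   # 160 linear meters of shelf
shelves_high = 4
rack_run = shelf_meters / shelves_high     # 40 m of rack run -> 8 racks of 5 m
seconds_per_book = 5 + 5 + 5               # walk + grab/put + walk back
librarians = 4
books_per_minute = librarians * 60 // seconds_per_book
print(rack_run, books_per_minute)          # 40.0 m, 16 books/min
```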

If we also account for the request handling time (10 s), the search time (5 s), and the time to track the hand-out in the system (2 s), we get a peak of about 400 books/hour. So we should be able to handle our estimate of 800 books a day in 2 hours.
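Adding the per-request overheads gives the conservative 400 books/hour figure; the exact division yields 450, and rounding down leaves some slack:

```python
handling_s = 15 + 10 + 5 + 2      # extraction + request + search + tracking
librarians = 4
exact_per_hour = librarians * 3600 // handling_s   # 450 books/hour
peak_per_hour = 400                                # rounded down for slack
hours_for_daily_load = 800 / peak_per_hour         # 2.0 hours for 800 books
print(exact_per_hour, hours_for_daily_load)
```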

Let’s calculate the other aspects: for 4 librarians to give out 400 books/hour, we need enough space for about 100 people waiting in line, and the entrance doors should allow 400 people/hour to walk through in both directions. That means our main bottlenecks would be the waiting space and the building doors.
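The 100-person queue is consistent with Little’s law if we assume an average wait of ~15 minutes (the wait time here is my assumption, not stated above):

```python
arrivals_per_hour = 400    # peak hand-out rate from the previous estimate
avg_wait_hours = 0.25      # assumed: ~15 minutes in line
# Little's law: L = lambda * W
people_waiting = arrivals_per_hour * avg_wait_hours
print(people_waiting)      # 100.0
```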

So during the proper calculations for a real design, we should find the optimal balance between storage space and waiting area.

That’s it for this level; time to move on. We estimated the overall library network demand at 10000 books/day. One unit handles 800 books/day, so we need 12.5 units. With a uniform geographic distribution, every reader should be within range of 1-2 units (on the city boundary) or 3-4 (inside the city). That gives us the ability to smooth out peaks a bit and/or cover shortages of the top books.

We also know that any library can be closed at any moment (fire, inspection, fridge door handles repaint, etc.), and the more units we have, the higher the probability that such events coincide. Thus, we need 1 redundant unit for every 5-6 units. So, in total, we need 15 units to fulfill the city’s demand, provided we maintain the required supply within each unit.
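The unit count with redundancy, following the estimates above (one spare per ~6 units reproduces the total of 15):

```python
import math

city_demand = 10_000              # books/day for the whole city
unit_capacity = 800               # books/day per unit
base_units = math.ceil(city_demand / unit_capacity)   # 13 units
spares = base_units // 6          # ~1 redundant unit per 5-6 units
total_units = base_units + spares
print(total_units)                # 15
```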

As we estimated earlier, we need to replenish the storage every week or two (earlier we guessed two): around half of the stock should be replaced, to follow the trends etc. So that’s a delivery of 4000 new books and 4000 books removed. On every restock, we need to extract 4000 books and insert 4000 new ones back. With our 400 books/hour throughput, one restock is 10 very intense hours. It still seems OK, especially since bulk operations are usually cheaper; but we should keep in mind that a restock operation is 5 times more expensive than a day of normal serving.
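The restock-cost comparison, treating one extract-plus-insert swap as roughly one handling operation (that equivalence is an assumption of this sketch):

```python
swaps = 4000                      # books swapped per restock
throughput = 400                  # books/hour from the serving estimate
restock_hours = swaps / throughput           # 10 intense hours
serving_hours = 800 / 400                    # 2 hours of normal daily serving
cost_ratio = restock_hours / serving_hours   # restock is 5x more expensive
print(cost_ratio)                            # 5.0
```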

An average book (2 cm × 20 cm × 30 cm) is about 1.2 liters of volume; call it 1.5 liters with packing slack, so 4000 books ≈ 6 cubic meters. This should fit into a single average light cargo vehicle. 1 cubic meter of paper weighs about 600 kg, so 6 m³ = 3.6 tonnes. An average light cargo vehicle can handle ~1.5 tonnes, so we need 3 of them, plus one for redundancy. We have 15 library units refreshed once every 2 weeks, which makes the schedule tight, so we need one more car.
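The delivery logistics as a sketch (book dimensions and paper density are the guesses above; 2×20×30 cm is 1.2 L, rounded up to 1.5 L for packing slack):

```python
import math

books = 4000
liters_per_book = 1.5                  # 2x20x30 cm = 1.2 L, rounded up
volume_m3 = books * liters_per_book / 1000       # 6.0 cubic meters
kg_per_m3 = 600                        # assumed density of packed paper
weight_t = volume_m3 * kg_per_m3 / 1000          # 3.6 tonnes
truck_capacity_t = 1.5                 # assumed light cargo vehicle payload
trucks = math.ceil(weight_t / truck_capacity_t)  # 3 trucks, plus spares
print(volume_m3, weight_t, trucks)
```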

And we still haven’t thought about how and from where these cars would move the books, so our design will also need supplier storehouses and garbage-collection points for no-longer-popular books…

Time is up. So, what’s so special about a NALSD question? Scalability is expected from any large system design. But in a NALSD interview, one has to be

**specific**. You can see a lot of guesses and assumptions above, but all of the calculations were based on the numbers already made. I have also tried to explain how to get to the right number, but it’s easy to get tired/bored and start skipping that. It’s also easy to stick with some number from your memory without providing a good explanation of its origin… But since the design is very specific, if you get any number wrong, you can change that number and simply recalculate the rest.

*I still remember how during my NALSD interview I stuck with the IOPS of a single HDD being 600, just because I had recently worked on optimisation of a RAID array where the whole array had 600 IOPS… The interviewer was slightly surprised: 600 IOPS per drive? :D*

During the interview, any of your assumptions and guesses can be corrected by the interviewer. The interviewer can also add constraints (ones you forgot, didn’t know about, or didn’t ask about, or simply to adjust the task because they think that’s a better way to continue). Since you are operating solely on specific numbers, that just requires small updates to the numbers, following the equations already written down.

If the correction of an assumption requires a full redesign of the system, it’s either a design error, or an incorrect correction, or the new assumption moves us beyond the applicability of the design (and it is not so rare in real life). It’s important to not miss this fact: you should validate the numbers that you get, to ensure they are realistic, both during the design and after any correction.

As SREs, we have to think about the scalability of real systems within the constraints of real hardware. There could be a beautiful algorithm that trades a gigantic time complexity for some amount of RAM… but 1 PiB of RAM per 1 CPU is not something we can build today. If we have 1 PiB of RAM, we also have around 10k CPUs. Or 20k. Or even 30k. That means we should focus on the real (local) optimum reachable today, not the global optimum reachable under ideal conditions.
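A quick co-scaling check; the machine shape is assumed (~2 CPUs and 128 GiB RAM per box), and lands within the 10k-30k range above:

```python
# RAM never comes alone; it arrives attached to CPUs.
PIB = 2**50
GIB = 2**30
ram_needed = 1 * PIB
ram_per_machine = 128 * GIB        # assumed commodity machine
cpus_per_machine = 2               # assumed
machines = ram_needed // ram_per_machine     # 8192 machines
cpus = machines * cpus_per_machine           # ~16k CPUs ride along with the RAM
print(machines, cpus)
```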

You don’t need to remember precise numbers, but you should be able to apply rules of thumb to get the orders of magnitude right. For example, IOPS: an HDD gives on the order of hundreds, an SSD on the order of hundreds of thousands. But those hundreds of thousands come with an HDD-to-SSD price difference of around 1:3 or 1:4, and that ignores everything around them: rack space, blades for the drives, network ports and other things that are no longer cheap once you work at peta- and exa-scale.

Now get comfy in your chair, relax, and try to design a system for growing and delivering fresh chicken eggs on subscription.