Queueing theory is perhaps one of the most important mathematical theories in systems design and analysis, yet few engineers ever learn it. This talk introduces the basics of queueing theory and explores the ramifications of queue behavior for system performance and resiliency, with an emphasis on async and Node.js behavior.
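As a taste of the math such a talk covers, the classic M/M/1 model shows why latency explodes as a server approaches full utilization: expected time in system is W = 1 / (mu − lambda). A minimal sketch with illustrative numbers (not taken from the talk itself):

```python
# M/M/1 queue: expected time in system W = 1 / (mu - lam),
# where mu is the service rate and lam the arrival rate (requests/sec).
# The rates below are illustrative assumptions.

def mm1_time_in_system(lam: float, mu: float) -> float:
    """Expected time a request spends in an M/M/1 queue (waiting + service)."""
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (mu - lam)

mu = 100.0  # server handles 100 req/s
for lam in (50.0, 90.0, 99.0):
    rho = lam / mu  # utilization
    print(f"rho={rho:.2f}  W={mm1_time_in_system(lam, mu) * 1000:.1f} ms")
# rho=0.50  W=20.0 ms
# rho=0.90  W=100.0 ms
# rho=0.99  W=1000.0 ms
```

Going from 50% to 99% utilization multiplies response time by 50 — the nonlinearity that makes queueing behavior so surprising in practice.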
"Have you tried turning it off and on again?" is one of the most common jokes in our industry. However, behind the comical aspect lies a fundamental architectural pattern - state encapsulation and recovery by clearing state data. This pattern is mostly used implicitly, but when used as a design principle unlocks a new paradigm of programming: Recovery Oriented Computing (ROC).
Traditionally, ACID is at the core of database usage. Like the JVM memory model, it represents the fundamental promises databases make to programmers, ensuring data sanity in an insane world. Unfortunately, ACID as imagined by programmers exists only in fantasy.
In recent years async I/O has become one of the main trends in the industry. On the JVM, libraries such as Netty and Finagle have commoditised async I/O to the extent that it is now the go-to model for many projects. Yet despite its growing popularity, async I/O remains problematic and misunderstood on many levels: plagued by complexity, compatibility, and maturity issues for years, it is only slowly gaining ample solutions for the future.
We all want fast services, but how fast is fast? Would you work hard to shave a millisecond off the mean latency? Off the 99th percentile? If aiming for 300ms latency, you might answer "probably not." However, due to various phenomena collectively known as "latency amplification," a single millisecond deep in your stack can turn into a large increase in user-visible latency—and this is very common in microservices-based systems. What is the true cost of a millisecond?
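One form of latency amplification is easy to simulate: with parallel fan-out, the user sees the *slowest* of N backend calls, so user-visible latency tracks the backends' tail, not their mean. A hedged sketch with made-up numbers:

```python
# Tail-latency amplification under fan-out: a frontend that waits on N
# parallel backend calls is as slow as the slowest one. The latency
# distribution below (99% fast, 1% slow outlier) is an illustrative
# assumption, not real measurement data.
import random
random.seed(42)

def backend_latency_ms() -> float:
    # mostly-fast service with an occasional 50 ms outlier
    return 1.0 if random.random() < 0.99 else 50.0

def request_latency_ms(fanout: int) -> float:
    # user-visible latency = max over all parallel backend calls
    return max(backend_latency_ms() for _ in range(fanout))

def p_slow(fanout: int, trials: int = 20_000) -> float:
    slow = sum(1 for _ in range(trials) if request_latency_ms(fanout) >= 50.0)
    return slow / trials

for n in (1, 10, 100):
    print(f"fanout={n:3d}  P(request hits the 50 ms tail) = {p_slow(n):.1%}")
```

A 1% outlier at a single backend becomes roughly a 10% hit at fan-out 10 and over 60% at fan-out 100 (since 1 − 0.99¹⁰⁰ ≈ 0.63) — the rare tail becomes the common case.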
When we build systems, our designs and tradeoffs reflect the different scales of the system — the speed of disks, the latency of the network; they reflect the constraints and abilities of the underlying technologies. But as technology advances, some of these assumptions become invalid. We no longer run on the physical machines for which RDBMS systems were designed; SSDs changed pretty much everything in the storage world, yet our software was designed for magnetic disks; and with NVRAM, O/S design is way off. This talk explores how changes in hardware technologies impact the design rationale of various systems, highlighting the importance of understanding and rethinking that rationale, and explores the new designs that arise from it.
HTTP is the de facto standard protocol of the internet and is heavily used in almost all systems — an in-depth understanding of HTTP is crucial for design, performance, scaling, and day-to-day operations.
Debugging is often considered a mysterious trait that some engineers were born with and others, alas, simply weren't. This talk is here to bust that myth: with a well-structured methodology and a couple of simple tips, we can all master debugging and stop relying on trial and error.
We've all heard numerous "awesome monitoring @ X" talks; Boring! Join me in exploring monitoring design principles through various fails - because we can learn sooo much more by analyzing cases where monitoring was done wrong :-)
10 years ago, we promoted the move from pet systems to faceless hordes of electronic cattle grazing on commodity infrastructure. But as the evolution of the cloud progresses, we find that the cattle methodology is no longer sufficient and that cloud-native systems resemble some other biological entity…