Event sourcing is probably among the most controversial and tricky design principles. In “classical” application design, the state is written to the persistence store, mutated, and fetched from the DB on virtually every operation, while the events causing state changes are transient and discarded the moment the change is applied. In event sourcing, by contrast, events are written to the store, never mutated, and read from the DB on rare occasions, while the state is transient and derived from the log of events. To some extent, event sourcing is a mirror reflection of the “classical” approach. One day my team and I embarked on a journey through it - and this post is the beginning of the story.
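The “mirror reflection” can be sketched in a few lines. This is a minimal, hypothetical illustration (the event and entity names are invented, not taken from any framework): events are appended to an immutable log, and the current state is recomputed by folding over that log rather than being stored and mutated in place.

```python
from dataclasses import dataclass

# Immutable events - once appended to the log, they are never changed.
@dataclass(frozen=True)
class Deposited:
    amount: int

@dataclass(frozen=True)
class Withdrawn:
    amount: int

def apply_event(balance: int, event) -> int:
    """Fold a single event into the transient state (here: an account balance)."""
    if isinstance(event, Deposited):
        return balance + event.amount
    if isinstance(event, Withdrawn):
        return balance - event.amount
    return balance

# The log is append-only; the state is derived from it on demand.
log = [Deposited(100), Withdrawn(30), Deposited(5)]
balance = 0
for event in log:
    balance = apply_event(balance, event)
# balance is now 75 - and can always be rebuilt from the log
```

Note that nothing here ever updates a stored balance: the log is the source of truth, and the state is a disposable projection of it.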
Code reviews are an integral part of the modern software development workflow - and a highly debated one. A lot has been said on why code reviews are important (e.g. Atlassian, Fullstory), how to do them (Google), what to look for (JetBrains), etc. There’s no shortage of “internet wisdom” on the topic, but there’s one quite common flaw that can significantly limit “your mileage” from code reviews and even cause them to harm your team’s productivity. In short, different aspects of the code (design, performance, security, style, naming, etc.) have very different costs of making a mistake vs. finding it during code review. Let’s take a look at how this affects the review process (“what to look for”) and a few techniques that can help improve it.
It wouldn’t be a major overstatement to say that the majority of applications - at least in the startup and enterprise world - are built to model and automate certain real-life business processes. As such, the application inevitably has to have a model - an idealized representation of the “domain”: the entities, events, and interactions found in the real-life process. The application also has to do something useful with that model, so it has to interact with the real world (through UIs, printers, actuators, etc.) and with other applications (such as other services, databases, queues, etc.). A common shortcut (and/or a caveat) is to use the same model to fulfill all these needs - however, this isn’t always the best course of action. Let’s take a look at the other options.
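To make the “same model vs. separate models” distinction concrete, here is a deliberately tiny, hypothetical sketch (the `Order`/`OrderDto` names are invented for illustration): one class carries the domain behavior, while a separate, behavior-free class is shaped for the outside world (an API response, a DB row, etc.), with an explicit conversion between them.

```python
from dataclasses import dataclass

@dataclass
class Order:
    """Domain model: holds business behavior (e.g. computing the total)."""
    items: list  # list of (name, price_cents) pairs

    def total_cents(self) -> int:
        return sum(price for _, price in self.items)

@dataclass
class OrderDto:
    """Transport model: flat, behavior-free, shaped for an API or DB schema."""
    item_names: list
    total_cents: int

def to_dto(order: Order) -> OrderDto:
    """Explicit mapping keeps the domain model free to evolve independently."""
    return OrderDto(
        item_names=[name for name, _ in order.items],
        total_cents=order.total_cents(),
    )

order = Order(items=[("book", 1500), ("pen", 250)])
dto = to_dto(order)
# dto.total_cents == 1750
```

The cost is the mapping code; the benefit is that renaming a domain field or changing an invariant no longer ripples into every external contract - which is the trade-off the “other options” revolve around.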
Dependency Injection is a well-known pattern and the de facto standard for implementing the Dependency Inversion Principle. Most modern frameworks have some level of support for Dependency Injection - from weaving the application together at runtime via public setters using XML as a spec (e.g. Java Spring), to compile-time constructor injection (e.g. macwire). However, while doing most of the heavy lifting, these tools and frameworks leave capturing the more sophisticated and valuable promises of DIP to the developers. Sadly, most of the time the result is… suboptimal - that is to say, not completely wrong, but it could have been better. This shortcoming is subtle, but “getting it right” often solves or even removes a lot of other questions and concerns - including some that spawn “both implementations are fine, let’s discuss which one to choose till the thermodynamic death of the Universe” debates.
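For readers who haven’t seen constructor injection outside a framework, here is a bare-bones, hypothetical sketch (the `Notifier`/`SignupService` names are invented): the consumer declares a dependency on an abstraction in its constructor, and the concrete implementation is supplied from the outside - which is exactly the wiring step that frameworks like Spring or macwire automate.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """The abstraction the consumer depends on (the 'inverted' dependency)."""
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    """One concrete implementation; others can be swapped in without touching the consumer."""
    def send(self, message: str) -> str:
        return f"email: {message}"

class SignupService:
    def __init__(self, notifier: Notifier):
        # The dependency is injected, not constructed here - the service
        # knows only the abstraction, never the concrete class.
        self._notifier = notifier

    def register(self, user: str) -> str:
        return self._notifier.send(f"welcome, {user}")

# "Wiring" happens once, at the edge of the application:
service = SignupService(EmailNotifier())
# service.register("ada") returns "email: welcome, ada"
```

The mechanical part - passing the right object into the constructor - is what the tools do for us; the subtler part discussed in the post is deciding what the abstraction should look like in the first place.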
The goal of any distributed system is to provide better availability, throughput, data durability, and other non-functional characteristics compared to a functionally similar non-distributed system. Principles of distributed systems design sometimes crystallize into short and expressive “mantras” - such as “eventually consistent” or “no single point of failure”. They are extremely useful, allowing otherwise complex concepts to be expressed in a short and unambiguous way, but sometimes they are a little too broad and cover (or deny) more ground than they should. Specifically, I’m talking about the “no single point of failure” principle - it turns out there are many dramatically successful distributed systems that violate this principle at their core. Let’s look at what they do instead.