[{"content":"1. Goal and Motivation This demo project explores how to assign unique, dynamic sequence numbers to multiple server instances in a distributed environment, inspired by challenges that surface when designing things like global ID generators (e.g., Twitter’s Snowflake). As described in the project’s README, the scenario began as a typical interview problem: how do you ensure each server instance receives (and keeps) a unique ID, especially if instances are started, stopped, or restarted?\nZooKeeper is leveraged here to coordinate between all node servers, and to manage these lifecycle and uniqueness challenges in a robust, automatic way.\nNOTE: please, do not take this code example as production ready, since it is a demo, we are not taking special care of important stuff as concurrency, mostly at the time of managing the connections.\nReferences: Demo Github Repo Nice Medium post that helped me to understand what ZooKeeper is Nice Zookeeper guide ZooKeeper Docker Image ZooKeeper Dotnet Library 2. What is ZooKeeper, and How Does It Help? ZooKeeper is a distributed coordination service. It can be used to manage configuration, naming, synchronization, and group services for large distributed systems. ZooKeeper organizes its data in a file-system-like hierarchy called \u0026ldquo;znodes.\u0026rdquo; These znodes can store both data and metadata, and are kept in sync across the ZooKeeper ensemble.\nKey features Ephemeral znodes: These are znodes that exist as long as the client that created them maintains its connection—perfect for live service registration. Watcher mechanism: Clients can set watches to be notified about specific changes (e.g., a znode disappearing, data changing, or children being added or removed). This enables real-time (\u0026ldquo;live\u0026rdquo;) updates and coordination between services. In this demo:\nEach server instance tries to claim an ID by creating a numbered ephemeral znode under a parent /sequence node. If an instance dies or disconnects, its znode is automatically deleted, freeing up its number for future instances. New instances dynamically see available (\u0026ldquo;free\u0026rdquo;) numbers and claim the lowest unused one. 3. Using ZooKeeper znodes to Track Instance IDs (Walkthrough of the Demo) The core logic of assigning and tracking instance sequence numbers lives in the ZookeeperDistributedConfiguration class.\nMain Elements of the Implementation Sequence Node Initialization: The parent /sequence znode is created if it doesn\u0026rsquo;t exist. Assigning IDs: Each server, on startup, looks for the lowest available number (by checking the children of /sequence), and tries to create an ephemeral znode like /sequence/1, /sequence/2, etc. Ephemeral znodes: The use of CreateMode.EPHEMERAL ensures znodes are removed if a server disconnects, making numbers immediately reusable. Live Coordination: When servers join or leave, the list of children under /sequence is updated live, so every instance knows which IDs are in use and which are free. Connection Handling: Robust connection and reconnection logic (with retries and watcher callbacks) is implemented using ZooKeeper’s watcher/event system. 
Example: ID Allocation Logic\nprivate async Task AssignSequenceIfNoAsync(CancellationToken cancellationToken)\n{\n    await InitializeAsync();\n    if (IsSequenceAssigned()) return;\n    var created = false;\n    var triesLeft = 5;\n    // Retry a few times in case another instance claims the same number first.\n    while (!created \u0026amp;\u0026amp; triesLeft \u0026gt; 0)\n    {\n        var assignedSequenceNumbers = await GetAssignedSequenceNumbersAsync(cancellationToken);\n        if (assignedSequenceNumbers.Count == 0 || assignedSequenceNumbers[0] != FirstSequence)\n        {\n            // The first sequence number is free, so take it.\n            _sequenceNumber = FirstSequence;\n        }\n        else\n        {\n            // Otherwise, find the first gap in the assigned numbers.\n            _sequenceNumber = FindFirstFreeSequenceNumber(assignedSequenceNumbers);\n        }\n        created = await AssignSequenceNumberAsync(_sequenceNumber, cancellationToken);\n        triesLeft--;\n    }\n}\nSummary of C# Implementation Minimalist and Direct: The approach is intentionally simple, focusing on basic ZooKeeper primitives rather than advanced recipes or third-party libraries. Classes of Interest: ZookeeperDistributedConfiguration: Main logic for sequence assignment. ZookeeperConnection: Handles the ZooKeeper client, connection retries, and reacting to ZooKeeper events. ZookeeperWatcher: Implements event/subscription logic to handle live updates triggered by ZooKeeper state changes. 4. Practical Demo: Running \u0026amp; Testing the Distributed Sequence Server This section walks you through running and testing the distributed sequence number server using Docker and Docker Compose. The scenario is based directly on the steps and demos from the project README.\nPrerequisites Docker and Docker Compose are installed. Clone or download the repo from here: Demo Github Repo\nStep 1: Start ZooKeeper\ncd ./code\ndocker compose up -d zookeeper\nNow you have a simple ZooKeeper instance running.\n(Optional) Test the ZooKeeper Instance\ndocker ps # or docker container ls\n# Find the zookeeper container ID, then connect to the ZooKeeper CLI:\ndocker exec -it \u0026lt;container-id\u0026gt; zkCli.sh\n# In the ZooKeeper CLI, try:\nls /\nYou should see:\n[zk: localhost:2181(CONNECTED) 1] ls /\n[zookeeper]\nStep 2: Build and Run SequenceNode Instances Build the SequenceNode Docker image:\ncd ./code/SequenceNode\ndocker image build -t sequencenode --target prod .\nStart multiple SequenceNode containers:\ndocker run -p 5001:80 --name sequencenode1 --network code_default -d sequencenode \u0026amp;\u0026amp; \\\ndocker run -p 5002:80 --name sequencenode2 --network code_default -d sequencenode \u0026amp;\u0026amp; \\\ndocker run -p 5003:80 --name sequencenode3 --network code_default -d sequencenode \u0026amp;\u0026amp; \\\ndocker run -p 5004:80 --name sequencenode4 --network code_default -d sequencenode\nNote: The code_default network is created by Docker Compose by default. If yours is named differently, adjust the --network flag accordingly.\nStep 3: Testing Open a terminal or browser to test the sequence API. Using curl:\ncurl http://localhost:500x/sequence\nReplace x with 1, 2, 3, or 4, depending on which container you want to test.\nYou should get:\n% curl http://localhost:5002/sequence\n2%\nUsing a browser:\nhttp://localhost:500x/swagger (API docs)\nhttp://localhost:500x/sequence (direct endpoint)\nDemo Sequence\nQuery the sequence for nodes 1, 2, and 3.\nStop node 2: docker container stop sequencenode2\nQuery node 4 (it should now claim the freed number 2!)\nRestart node 2: docker container start sequencenode2\nQuery node 2 again; it should now be assigned sequence 4.\nNotes on ZooKeeper Container If you stop the ZooKeeper container, all sequence API requests will fail. 
When restarted, requests work again, but ephemeral znodes may not have been deleted yet; new sequences will start after the last one inserted (e.g., 5 in the running example).\nStep 4: Test with Docker Compose Scaling\nChange the number of replicas in docker-compose.yml, e.g. from 1 to 4. Then run:\ndocker compose up -d\nAll replicas start up. Access via Docker networking: connect directly to containers:\ndocker ps # Find the container ID\ndocker exec \u0026lt;container-id\u0026gt; wget -qO- http://localhost/sequence\nOr create an Alpine container attached to the same network:\ndocker run -it --rm --network code_default alpine sh\nwget -qO- http://node/sequence\nwget -qO- http://node/sequence\nwget -qO- http://node/sequence\n...\nThis approach lets you observe ephemeral sequence allocation under load balancing and real distributed conditions.\nThis demo exemplifies a hands-on, minimal setup for learning about ZooKeeper, distributed coordination, and ephemeral resource assignment using znodes. It also illustrates how even a basic C# implementation can make use of these powerful coordination patterns.\n","permalink":"https://blog.rulyotano.com/articles/zookeeper/","summary":"\u003ch2 id=\"1-goal-and-motivation\"\u003e1. Goal and Motivation\u003c/h2\u003e\n\u003cp\u003eThis demo project explores how to assign unique, dynamic sequence numbers to multiple server instances in a distributed environment, inspired by challenges that surface when designing things like global ID generators (e.g., Twitter’s Snowflake). As described in the project’s README, the scenario began as a typical interview problem: how do you ensure each server instance receives (and keeps) a unique ID, especially if instances are started, stopped, or restarted?\u003c/p\u003e","title":"Building a Distributed Sequence Generator with ZooKeeper and C#"},{"content":"\u0026ldquo;System Design Interview - An Insider\u0026rsquo;s Guide: Volume 2\u0026rdquo; by Alex Xu and Sahn Lam Book Structure The book includes 13 chapters, each dedicated to a specific system design problem. It follows a structured approach:\nProblem Requirements: Each chapter begins with an exploration of the functional and non-functional requirements of the system to clarify scope. Design Steps: A step-by-step approach to breaking down the system design, with emphasis on high-level architecture and detailed component design. Component Analysis: Each section dives into specific components like databases, APIs, and caching, explaining their roles and interactions. Scalability and Optimization: The book provides strategies for scaling the system and handling large data sets effectively. Key Chapters and Topics Proximity Service \u0026amp; Nearby Friends: Techniques for real-time location services and friend-finding applications. Google Maps \u0026amp; Distributed Message Queue: Advanced topics in geospatial indexing, message queue systems, and handling large-scale asynchronous messaging. Metrics Monitoring \u0026amp; Ad Click Event Aggregation: Designing for high-traffic systems and aggregating real-time data streams. S3-like Object Storage: Building a scalable storage system using principles like data partitioning, replication, and consistency. Payment System \u0026amp; Digital Wallet: Securing transactions, ensuring reliability, and handling data privacy in financial applications. Diagrams and Frameworks The book includes 300+ diagrams to help visualize complex systems, making it easier to understand the interactions between different system components. 
The 4-step framework (requirements, high-level design, detailed design, and trade-offs) is emphasized throughout, helping readers systematically tackle interview questions.\nThis volume is particularly useful for those aiming for senior engineering roles, as it covers complex, large-scale systems with a practical, problem-solving approach.\n","permalink":"https://blog.rulyotano.com/books/system-design-2/","summary":"\u003ch2 id=\"-by-alex-xu-and-sahn-lam\"\u003e\u003cstrong\u003e\u0026ldquo;System Design Interview - An Insider\u0026rsquo;s Guide: Volume 2\u0026rdquo;\u003c/strong\u003e by Alex Xu and Sahn Lam\u003c/h2\u003e\n\u003ch3 id=\"book-structure\"\u003eBook Structure\u003c/h3\u003e\n\u003cp\u003eThe book includes 13 chapters, each dedicated to a specific system design problem. It follows a structured approach:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003e\u003cstrong\u003eProblem Requirements:\u003c/strong\u003e Each chapter begins with an exploration of the functional and non-functional requirements of the system to clarify scope.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eDesign Steps:\u003c/strong\u003e A step-by-step approach to breaking down the system design, with emphasis on high-level architecture and detailed component design.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eComponent Analysis:\u003c/strong\u003e Each section dives into specific components like databases, APIs, and caching, explaining their roles and interactions.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eScalability and Optimization:\u003c/strong\u003e The book provides strategies for scaling the system and handling large data sets effectively.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch3 id=\"key-chapters-and-topics\"\u003eKey Chapters and Topics\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eProximity Service \u0026amp; Nearby Friends\u003c/strong\u003e: Techniques for real-time location services and friend-finding applications.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eGoogle Maps \u0026amp; Distributed Message Queue\u003c/strong\u003e: Advanced topics in geospatial indexing, message queue systems, and handling large-scale asynchronous messaging.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eMetrics Monitoring \u0026amp; Ad Click Event Aggregation\u003c/strong\u003e: Designing for high-traffic systems and aggregating real-time data streams.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eS3-like Object Storage\u003c/strong\u003e: Building a scalable storage system using principles like data partitioning, replication, and consistency.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003ePayment System \u0026amp; Digital Wallet\u003c/strong\u003e: Securing transactions, ensuring reliability, and handling data privacy in financial applications.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3 id=\"diagrams-and-frameworks\"\u003eDiagrams and Frameworks\u003c/h3\u003e\n\u003cp\u003eThe book includes \u003cstrong\u003e300+ diagrams\u003c/strong\u003e to help visualize complex systems, making it easier to understand the interactions between different system components. 
The \u003cstrong\u003e4-step framework\u003c/strong\u003e (requirements, high-level design, detailed design, and trade-offs) is emphasized throughout, helping readers systematically tackle interview questions.\u003c/p\u003e","title":"System Design Interview - An insider's guide: Volume 2"},{"content":"\u0026ldquo;System Design Interview – An Insider’s Guide\u0026rdquo; by Alex Xu is a highly regarded resource for software engineers and architects preparing for technical interviews, particularly those focused on system design. The book provides practical and structured approaches to solving complex system design problems, which are commonly part of interviews at top tech companies.\nHere’s a summary of key aspects of the book:\n1. Structured Approach to System Design Problems The book introduces a clear and systematic way to tackle system design questions during interviews. It emphasizes a methodical approach, which typically includes:\nUnderstanding the problem: Asking clarifying questions to identify the real needs. Defining system requirements: Listing functional and non-functional requirements like scalability, availability, and latency. High-level design: Breaking down the system into key components and describing how they interact. Detailed design: Diving deeper into important components (e.g., databases, caching, APIs, etc.). Identifying bottlenecks: Considering scalability, reliability, and performance issues and addressing them with solutions like load balancing, sharding, etc. 2. Commonly Asked System Design Problems The book covers real-world system design problems frequently asked in interviews, such as:\nDesign a URL shortening service (similar to Bit.ly) Design a messaging system (e.g., WhatsApp) Design a social media newsfeed system Design an API rate limiter Design an e-commerce platform 3. Design Components and Trade-offs Alex Xu discusses key components and tools that are frequently used in system design:\nDatabases: SQL vs. NoSQL, replication, partitioning (sharding) Caching: To improve system performance and reduce database load Load balancing: Distributing traffic across servers to handle high traffic Data consistency and availability: CAP theorem (Consistency, Availability, and Partition Tolerance) and choosing the right balance depending on the system’s needs Latency and throughput: Optimizing for speed and capacity 4. Scalability and High Availability The book dives deep into concepts like horizontal vs. vertical scaling, how to handle system failures, and ensuring the system stays up even under high load. It covers techniques like replication, load balancing, and designing for eventual consistency.\n5. Diagrams and Visual Explanations Each design problem in the book is accompanied by visual diagrams. These visuals help readers understand the flow of data between components and illustrate how different parts of a system interact.\n6. Real-world Considerations Alex Xu emphasizes the importance of thinking beyond theoretical design. He covers real-world trade-offs that engineers have to make, such as balancing performance with cost or dealing with inconsistent data when scaling systems globally.\n7. 
Interview Strategy Besides system design principles, the book also offers strategies for approaching interviews, including communication tips, collaboration with interviewers, and the importance of justifying your design choices.\nIn summary, \u0026ldquo;System Design Interview – An Insider’s Guide\u0026rdquo; provides readers with practical tools and frameworks to approach system design interviews with confidence. It is a comprehensive guide that covers both the technical and strategic aspects of designing large-scale systems.\n","permalink":"https://blog.rulyotano.com/books/system-design-1/","summary":"\u003cp\u003e\u003cstrong\u003e\u0026ldquo;System Design Interview – An Insider’s Guide\u0026rdquo;\u003c/strong\u003e by Alex Xu is a highly regarded resource for software engineers and architects preparing for technical interviews, particularly those focused on system design. The book provides practical and structured approaches to solving complex system design problems, which are commonly part of interviews at top tech companies.\u003c/p\u003e\n\u003cp\u003eHere’s a summary of key aspects of the book:\u003c/p\u003e\n\u003ch3 id=\"1-structured-approach-to-system-design-problems\"\u003e1. \u003cstrong\u003eStructured Approach to System Design Problems\u003c/strong\u003e\u003c/h3\u003e\n\u003cp\u003eThe book introduces a clear and systematic way to tackle system design questions during interviews. It emphasizes a methodical approach, which typically includes:\u003c/p\u003e","title":"System Design Interview - An insider's guide"},{"content":"Book \u0026ldquo;Refactoring\u0026rdquo; web\nCatalog of Refactoring\nRefactoring: Improving the Design of Existing Code, written by Martin Fowler, is a highly regarded book in the field of software development. It introduces the concept of refactoring, which is the process of restructuring existing code without changing its external behavior to improve its readability, maintainability, and performance. Here’s a brief summary of the book’s main points:\n1. Understanding Refactoring Fowler defines refactoring as a disciplined technique for restructuring existing code without changing its observable behavior. The goal is to make the code more maintainable, readable, and adaptable to future requirements. Refactoring is incremental, involving small, manageable changes. This minimizes risks and allows for gradual improvement over time. 2. The Purpose and Benefits of Refactoring As code is modified over time, it tends to accumulate \u0026ldquo;technical debt\u0026rdquo;—problems that arise when changes are made quickly or without proper planning. Refactoring helps reduce technical debt, making the codebase easier to understand and work with. Key benefits include: Improved readability: Well-refactored code is easier for both the original developer and others to understand. Enhanced maintainability: Clean, well-structured code is easier to modify and update, reducing the risk of introducing bugs. Improved performance: Though not always the primary goal, refactoring can sometimes lead to more efficient code. Ease of extension: Well-structured code can be extended with new features more easily and with less risk. 3. Code Smells: Identifying Refactoring Opportunities Fowler introduces code smells, which are indicators that code needs refactoring. Recognizing these patterns helps developers pinpoint areas of the code that could benefit from improvement. Here’s an expanded list of common code smells:\n1. 
Duplicated Code Problem: The same or similar code appears in multiple places, which can lead to inconsistencies and higher maintenance costs. Solution: Extract the common code into a single method or class. 2. Long Method Problem: Methods that contain too much code, making them difficult to understand. Solution: Break the method into smaller, more focused methods (e.g., using Extract Method). 3. Large Class Problem: Classes that have too many responsibilities or are handling too many tasks. Solution: Break the class into smaller, more specialized classes (e.g., Extract Class). 4. Long Parameter List Problem: Methods with too many parameters, which can make the code difficult to read and understand. Solution: Group related parameters into a single object (e.g., Introduce Parameter Object). 5. Divergent Change Problem: A class that frequently changes for multiple reasons, indicating it might be handling too many responsibilities. Solution: Split the class into smaller classes, each with a more focused responsibility (e.g., Extract Class). 6. Shotgun Surgery Problem: A single change impacts multiple classes, indicating poor encapsulation. Solution: Consolidate the code in a single class, so changes are localized. 7. Feature Envy Problem: A method in one class is more interested in the data of another class. Solution: Move the method to the class it is most interested in (e.g., Move Method). 8. Data Clumps Problem: Groups of data that frequently appear together in multiple places, suggesting they should be encapsulated. Solution: Encapsulate these data items into a single class or data structure. 9. Primitive Obsession Problem: Over-reliance on primitive types (e.g., strings, integers) instead of more meaningful data structures. Solution: Replace primitives with small classes that encapsulate the specific data and provide related methods. 10. Switch Statements (Conditional Complexity) Problem: Complex conditional logic or multiple switch statements that may lead to duplicated code. Solution: Use polymorphism, encapsulating each condition into its own class. 11. Parallel Inheritance Hierarchies Problem: The need to create corresponding subclasses in multiple hierarchies simultaneously. Solution: Redesign the classes to eliminate the parallel hierarchy. 12. Lazy Class Problem: Classes that don’t do enough to justify their existence, usually after refactoring has moved functionality elsewhere. Solution: Inline the class into its parent class or remove it entirely. 13. Speculative Generality Problem: Code that includes generalizations not currently needed but added just in case they might be useful. Solution: Remove these unnecessary generalizations to simplify the code. 14. Temporary Field Problem: Fields that are only used occasionally, adding unnecessary complexity to the class. Solution: Move these fields to methods or classes where they are relevant or introduce an appropriate pattern to manage them. 15. Message Chains Problem: A chain of method calls that navigate through several objects to get data or functionality. Solution: Use a Hide Delegate refactoring to encapsulate the chain of calls, making it easier to maintain. 16. Middle Man Problem: A class that delegates too many responsibilities to other classes, serving only as a pass-through. Solution: Remove the middleman by having clients communicate directly with the real class. 17. Inappropriate Intimacy Problem: Two classes that are overly familiar with each other’s internal details, leading to tight coupling. 
Solution: Reduce the coupling by moving some of the responsibilities to another class or using design patterns like Mediator or Facade. 18. Alternative Classes with Different Interfaces Problem: Two classes that perform similar tasks but have different interfaces, leading to confusion and inconsistency. Solution: Unify the interfaces of the classes or use inheritance or an interface to standardize access. 4. Catalog of Refactoring Techniques Fowler provides an extensive catalog of refactoring techniques, each explained with examples and step-by-step instructions. Key techniques include: Extract Method, Inline Method, Rename Method or Variable, Move Method, Introduce Parameter Object, Replace Temp with Query, Encapsulate Field, Decompose Conditional, Replace Magic Number with Constant, and Replace Conditional with Polymorphism. 5. Refactoring Patterns and Strategies Refactorings are organized into categories based on the purpose they serve, such as Composing Methods, Moving Features Between Objects, Organizing Data, Simplifying Conditional Expressions, Making Method Calls Simpler, and Dealing with Generalization. 6. Testing During Refactoring Fowler emphasizes testing as essential for refactoring, advocating for automated unit tests that verify behavior before, during, and after refactoring. 7. Refactoring in an Agile Environment In Agile projects, continuous refactoring is encouraged, allowing the code to evolve over time in response to changing requirements. By making it a regular practice, developers can prevent code decay. 8. Refactoring Legacy Code Fowler provides strategies for handling legacy code, such as using Characterization Tests to document and understand the existing behavior of code before refactoring. 9. The Economics of Refactoring Fowler argues that refactoring is a valuable investment. Cleaner code leads to better maintainability, less time spent on debugging, and a higher-quality product in the long run. Conclusion Refactoring: Improving the Design of Existing Code is a foundational text in software development. Fowler\u0026rsquo;s insights and detailed techniques are essential for any developer who aims to write clean, sustainable code that is adaptable to the ever-evolving demands of software. The book emphasizes that refactoring is an ongoing process that, when combined with strong testing practices, can dramatically improve the quality and maintainability of a codebase.\n","permalink":"https://blog.rulyotano.com/books/refactoring-existing-code/","summary":"\u003cp\u003e\u003ca href=\"https://refactoring.com/\"\u003eBook \u0026ldquo;Refactoring\u0026rdquo; web\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://refactoring.com/catalog/\"\u003eCatalog of Refactoring\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e\u003cem\u003eRefactoring: Improving the Design of Existing Code\u003c/em\u003e, written by Martin Fowler, is a highly regarded book in the field of software development. It introduces the concept of \u003cem\u003erefactoring\u003c/em\u003e, which is the process of restructuring existing code without changing its external behavior to improve its readability, maintainability, and performance. Here’s a brief summary of the book’s main points:\u003c/p\u003e\n\u003ch3 id=\"1-understanding-refactoring\"\u003e1. \u003cstrong\u003eUnderstanding Refactoring\u003c/strong\u003e\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eFowler defines refactoring as a disciplined technique for restructuring existing code without changing its observable behavior. 
The goal is to make the code more maintainable, readable, and adaptable to future requirements.\u003c/li\u003e\n\u003cli\u003eRefactoring is incremental, involving small, manageable changes. This minimizes risks and allows for gradual improvement over time.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3 id=\"2-the-purpose-and-benefits-of-refactoring\"\u003e2. \u003cstrong\u003eThe Purpose and Benefits of Refactoring\u003c/strong\u003e\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eAs code is modified over time, it tends to accumulate \u0026ldquo;technical debt\u0026rdquo;—problems that arise when changes are made quickly or without proper planning.\u003c/li\u003e\n\u003cli\u003eRefactoring helps reduce technical debt, making the codebase easier to understand and work with.\u003c/li\u003e\n\u003cli\u003eKey benefits include:\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eImproved readability:\u003c/strong\u003e Well-refactored code is easier for both the original developer and others to understand.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eEnhanced maintainability:\u003c/strong\u003e Clean, well-structured code is easier to modify and update, reducing the risk of introducing bugs.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eImproved performance:\u003c/strong\u003e Though not always the primary goal, refactoring can sometimes lead to more efficient code.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eEase of extension:\u003c/strong\u003e Well-structured code can be extended with new features more easily and with less risk.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3 id=\"3-code-smells-identifying-refactoring-opportunities\"\u003e3. \u003cstrong\u003eCode Smells: Identifying Refactoring Opportunities\u003c/strong\u003e\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eFowler introduces \u003cem\u003ecode smells\u003c/em\u003e, which are indicators that code needs refactoring. Recognizing these patterns helps developers pinpoint areas of the code that could benefit from improvement.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eHere’s an expanded list of common code smells:\u003c/p\u003e","title":"Refactoring: Improving the Design of Existing Code"},{"content":"I want to share my experience in improving our service latency and the steps I\u0026rsquo;ve taken to get there. As a result, the P75 latency went down from more than 100 milliseconds to less than 5, by de-normalizing the SQL queries and creating one specific cache to fit our needs.\nSituation Our service is a read-heavy system, with a 99:1 ratio of reads over writes. Built in .NET Core, we use EF Core as the ORM (Object Relational Mapper), and our main DB is Postgres (a relational DB). We used Redis for caching. But it was not a custom cache; it was a \u0026ldquo;general purpose\u0026rdquo; one. We used a library that acted as middleware between EF and the DB, known as the Second-Level Cache. Very high latency (P75 ~ 140ms and P99 ~ 4s) with high peaks of more than 2 seconds! Frequent incidents! A lot of DB connections were created when the cache was cold, so we reached the Postgres connection limit, and new pods could not be created, even when existing ones were in a bad state. 
Task My task here was to investigate the situation and find out what could be wrong, paying special attention to the second-level cache, since we already suspected the problem could be there.\nActions 1) Validate our hypothesis: the Second-Level Cache library is the culprit To validate this, I ran some performance tests comparing our endpoint\u0026rsquo;s behavior with and without the Second-Level Cache library.\nI created 2 testing environments, with and without the cache. Then I wrote a couple of load tests using the k6 load-testing tool and, with the help of this repo (xk6-output-influxdb), I created meaningful charts that helped support the results.\nThe next image is a good sample of the results I found:\nEach line is a different query (with different arguments) made to our service.\nOn the left: two consecutive runs against the service without the cache. We can see higher response times, but the lines are stable and predictable.\nOn the right: two consecutive runs against the service with the Second-Level Cache. The interesting part is the first run (with a cold cache): response times were very high, and many requests failed. Things improved on the second run (with a warm cache): everything went right, and the maximum request time was almost half that of the run without the cache.\nDone! We validated our hypothesis and demonstrated that, even though performance is better once the cache is warm, a cold cache could produce very high latency and unexpected results. Now we needed to find an alternative solution.\n2) Finding a solution: Create a custom cache We needed to remove/replace the Second-Level Cache, so the new cache had to sit between our services layer and our DB. To create an effective cache, we needed to change almost all of our queries! So I did what I\u0026rsquo;ve called \u0026ldquo;de-normalizing\u0026rdquo; the queries (which, in other words, is just removing the joins). By doing this we were able to create a more efficient cache, since we could reuse the same cache items across several API requests.\nFor example, this was one of the old queries we used to have:\nSELECT s1.\u0026#34;Id\u0026#34;, s1.\u0026#34;AddressExternalId\u0026#34;, s1.\u0026#34;CreatedAt\u0026#34;, s1.\u0026#34;CreatedBy\u0026#34;, s1.\u0026#34;FacilityAddressId\u0026#34;, s1.\u0026#34;FacilityId\u0026#34;, s1.\u0026#34;LastModifiedAt\u0026#34;, s1.\u0026#34;LastModifiedBy\u0026#34;, s1.\u0026#34;OccurredAt\u0026#34;, s1.\u0026#34;PlanId\u0026#34;, s1.\u0026#34;SubjectId\u0026#34;, t.\u0026#34;Id\u0026#34;, t.\u0026#34;Id0\u0026#34;\nFROM (\n    SELECT s.\u0026#34;Id\u0026#34;, s0.\u0026#34;Id\u0026#34; AS \u0026#34;Id0\u0026#34;\n    FROM \u0026#34;SubjectExternals\u0026#34; AS s\n    INNER JOIN \u0026#34;Subjects\u0026#34; AS s0 ON s.\u0026#34;SubjectId\u0026#34; = s0.\u0026#34;Id\u0026#34;\n    WHERE s.\u0026#34;ExternalId\u0026#34; = @__externalId_0 AND s.\u0026#34;SubjectType_Id\u0026#34; = @__type_Id_1\n    LIMIT 1) AS t\nINNER JOIN \u0026#34;SubjectAddresses\u0026#34; AS s1 ON t.\u0026#34;Id0\u0026#34; = s1.\u0026#34;SubjectId\u0026#34;\nORDER BY t.\u0026#34;Id\u0026#34;, t.\u0026#34;Id0\u0026#34;\nHere we are just getting the complete address of one specific subject. It looks ugly, right? 
This query is auto-generated by our ORM, and if we look at its query plan in a production DB, we can see it is a very expensive one.\nSort (cost=1726.96..1726.97 rows=3 width=137)\n  Sort Key: s.\u0026#34;Id\u0026#34;, s1.\u0026#34;SubjectId\u0026#34;\n  -\u0026gt; Nested Loop (cost=5.27..1726.94 rows=3 width=137)\n     -\u0026gt; Limit (cost=0.83..1710.78 rows=1 width=32)\n        -\u0026gt; Nested Loop (cost=0.83..1710.78 rows=1 width=32)\n           -\u0026gt; Index Scan using \u0026#34;IX_SubjectExternals_ExternalType_ExternalId\u0026#34; on \u0026#34;SubjectExternals\u0026#34; s (cost=0.41..1702.35 rows=1 width=32)\n              Index Cond: ((\u0026#34;ExternalId\u0026#34;)::text = \u0026#39;69793\u0026#39;::text)\n              Filter: (\u0026#34;SubjectType_Id\u0026#34; = 1)\n           -\u0026gt; Index Only Scan using \u0026#34;PK_Subjects\u0026#34; on \u0026#34;Subjects\u0026#34; s0 (cost=0.41..8.43 rows=1 width=16)\n              Index Cond: (\u0026#34;Id\u0026#34; = s.\u0026#34;SubjectId\u0026#34;)\n     -\u0026gt; Bitmap Heap Scan on \u0026#34;SubjectAddresses\u0026#34; s1 (cost=4.44..16.12 rows=3 width=105)\n        Recheck Cond: (\u0026#34;SubjectId\u0026#34; = s0.\u0026#34;Id\u0026#34;)\n        -\u0026gt; Bitmap Index Scan on \u0026#34;IX_SubjectAddresses_SubjectId\u0026#34; (cost=0.00..4.44 rows=3 width=0)\n           Index Cond: (\u0026#34;SubjectId\u0026#34; = s0.\u0026#34;Id\u0026#34;)\nSo this would be replaced by these 2 simpler queries:\nSELECT s.\u0026#34;SubjectId\u0026#34;\nFROM \u0026#34;SubjectExternals\u0026#34; AS s\nWHERE s.\u0026#34;ExternalId\u0026#34; = @__externalId_0 AND s.\u0026#34;SubjectType_Id\u0026#34; = @__type_Id_1\nLIMIT 1\nand\nSELECT s.\u0026#34;Id\u0026#34;, s.\u0026#34;AddressExternalId\u0026#34; AS \u0026#34;ExternalId\u0026#34;, s.\u0026#34;PlanId\u0026#34;, s.\u0026#34;FacilityId\u0026#34;, s.\u0026#34;FacilityAddressId\u0026#34;, s.\u0026#34;OccurredAt\u0026#34;\nFROM \u0026#34;SubjectAddresses\u0026#34; AS s\nWHERE s.\u0026#34;SubjectId\u0026#34; = @__subjectId_0\nAfter these changes, not only are the simpler queries faster, but it is also much easier to create the cache, which can be reused by all the API requests.\nSo I analyzed all the API endpoints we had and the information they needed, and I concluded we only needed 5 of these canonical queries.\nThen I created a new query service, reused across all the endpoints, with these 5 methods wrapping the queries. This new service is where the cache lives: a very simple cache-aside implementation (if the key is in the cache, return the cached value; if it isn\u0026rsquo;t, run the query and add the result), as shown in the sketch below.\nCache invalidation is a key factor to consider. Initially I used a very short TTL (5 minutes), and in a second iteration we changed to a longer TTL (4 days).\nI didn\u0026rsquo;t implement the whole thing for these performance tests; I just picked one API endpoint and changed only the queries performed there. The goal was to make a point and validate the new solution. 
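As a reference, here is a minimal sketch of that cache-aside read path, assuming an IDistributedCache-backed Redis cache and System.Text.Json for serialization. The SubjectAddress shape, the ISubjectAddressQuery inner query service, and the key format are hypothetical stand-ins, not the service\u0026rsquo;s real code:\nusing System;\nusing System.Text.Json;\nusing System.Threading.Tasks;\nusing Microsoft.Extensions.Caching.Distributed;\n\npublic record SubjectAddress(Guid Id, string ExternalId); // illustrative shape only\npublic interface ISubjectAddressQuery { Task\u0026lt;SubjectAddress\u0026gt; GetBySubjectIdAsync(Guid subjectId); }\n\npublic class CachedSubjectAddressQuery\n{\n    private readonly IDistributedCache _cache;\n    private readonly ISubjectAddressQuery _inner; // runs the simple, de-normalized SQL query\n\n    public CachedSubjectAddressQuery(IDistributedCache cache, ISubjectAddressQuery inner)\n    {\n        _cache = cache;\n        _inner = inner;\n    }\n\n    public async Task\u0026lt;SubjectAddress\u0026gt; GetBySubjectIdAsync(Guid subjectId)\n    {\n        var key = \u0026#34;subject-address:\u0026#34; + subjectId;\n        // Cache hit: return the cached value.\n        var cached = await _cache.GetStringAsync(key);\n        if (cached != null) return JsonSerializer.Deserialize\u0026lt;SubjectAddress\u0026gt;(cached);\n        // Cache miss: run the de-normalized query and store the result with a TTL.\n        var address = await _inner.GetBySubjectIdAsync(subjectId);\n        await _cache.SetStringAsync(key, JsonSerializer.Serialize(address),\n            new DistributedCacheEntryOptions { AbsoluteExpirationRelativeToNow = TimeSpan.FromDays(4) });\n        return address;\n    }\n}\nBecause each cached item maps to exactly one de-normalized query, the same entry can serve many different API requests, which is what made the longer TTL viable.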
The results of these tests were amazing! A very strong win for the new custom cache implementation. Almost half of the requests made to the service with the 2nd Level Cache failed, while the service with the Custom Cache was able to handle practically all of them!\nAnd here are some of the results in numbers (note that these tests ran against testing environments, but both had exactly the same config):\n| **http_req_duration** | avg | min | med | max | p(90) | p(95) |\n|-----------------------|-----------|-------------|-----------|------------|-----------|------------|\n| 2nd Level Cache | 16.25s | 0s | 4.31s | 1m0s | 59.92s | 59.95s |\n| **Custom Cache** | **5.21s** | **21.56ms** | **5.62s** | **23.74s** | **9.43s** | **10.53s** |\n\n| **http_req_failed** | Failure % | ✗ Error | ✓ Success |\n|---------------------|-----------|---------|-----------|\n| 2nd Level Cache | 44.72% | 1649 | 2038 |\n| **Custom Cache** | **1.18%** | **136** | **11301** |\n3) Get it done! Finally, I had what I was looking for: the root cause of the performance issues, and an alternative solution proven to work much better. The next step was to share this with my team, so I created a document explaining the results and the new proposal, and scheduled a presentation.\nAfter gathering some useful feedback, I created the roadmap and tasks to implement it, divided into smaller steps. We were able to replace the whole thing in just a couple of weeks.\nResult Incidents were gone for good; we have not experienced a performance issue that caused an incident since. The main team\u0026rsquo;s goal for the whole quarter was achieved in just a few weeks! One of our team\u0026rsquo;s OITs was getting P75 down to 80ms or less, and we achieved less than 10ms. Money saved! We needed less processing power and fewer pods, we downgraded the DB tier from medium to small, and we needed less Redis memory. It opened the door to more teams willing to integrate with our service. Metrics improved: ","permalink":"https://blog.rulyotano.com/blog/article/improve-service-latency/","summary":"\u003cp\u003eI want to share my experience in improving our service latency and the steps I\u0026rsquo;ve taken to get there. As a result, the P75 latency went down from more than 100 milliseconds to less than 5, by de-normalizing the SQL queries and creating one specific cache to fit our needs.\u003c/p\u003e\n\u003ch2 id=\"situation\"\u003eSituation\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eOur service is a read-heavy system, with a 99:1 ratio of reads over writes\u003c/li\u003e\n\u003cli\u003eBuilt in .NET Core, we use EF Core as the ORM (Object Relational Mapper), and our main DB is Postgres (a relational DB)\u003c/li\u003e\n\u003cli\u003eWe used Redis for caching. But it was not a custom cache; it was a \u0026ldquo;general purpose\u0026rdquo; one. We used a library that acted as middleware between EF and the DB, known as the Second-Level Cache.\u003c/li\u003e\n\u003cli\u003eVery high latency (\u003cstrong\u003eP75 ~ 140ms\u003c/strong\u003e and \u003cstrong\u003eP99 ~ 4s\u003c/strong\u003e) with high peaks of more than 2 seconds!\u003c/li\u003e\n\u003cli\u003eFrequent incidents! 
A lot of DB connections were created when the cache was cold, so we reached the Postgres connection limit, and new pods could not be created, even when existing ones were in a bad state.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003e\u003cimg alt=\"Latency before\" loading=\"lazy\" src=\"/images/improving-latency-by-20x-times/oits-before.png\"\u003e\u003cimg alt=\"Redis cache hit rate before\" loading=\"lazy\" src=\"/images/improving-latency-by-20x-times/redis-before-long-ttls.png\"\u003e\u003c/p\u003e","title":"Improving latency by 20x times!"},{"content":"Domain-Driven Design Distilled by Vaughn Vernon is a concise and practical guide to understanding and applying Domain-Driven Design (DDD). DDD is an approach to software development that emphasizes collaboration between technical experts and domain experts to create models that reflect complex business requirements. This book distills the core principles of DDD, making them accessible to both beginners and experienced practitioners.\nKey Principles and Concepts: Domain-Driven Design Overview: DDD is about aligning software design with business needs. The goal is to develop a model that captures the core domain logic and communicates effectively with stakeholders. Vernon emphasizes that DDD is most valuable for complex domains where business rules and workflows are intricate and require a deep understanding. Core Concepts in Domain-Driven Design: Domain: The problem space or area of business that the software aims to address. Model: A simplification of reality created to solve specific problems within the domain. The model represents domain concepts and their relationships. Ubiquitous Language: A shared language between developers and domain experts. This language is used consistently throughout the project to avoid misunderstandings and misinterpretations. Bounded Context: A boundary within which a particular domain model is defined and applicable. Different bounded contexts may have their own distinct models and ubiquitous languages. Strategic Design: Bounded Contexts: The concept of bounded contexts is central to strategic design in DDD. Each bounded context represents a particular part of the domain with its own model, terminology, and language. Context Mapping: Helps to identify and understand the relationships between different bounded contexts. Vernon explains patterns like Shared Kernel, Customer-Supplier, and Anti-Corruption Layer, which define how different contexts interact. Strategic Patterns: Describes strategic collaboration patterns, including Partnership, Shared Kernel, Customer-Supplier, and Conformist, that inform how different bounded contexts relate and communicate. Tactical Design Patterns: Tactical design patterns help implement the core domain logic within a bounded context. Vernon introduces and explains essential DDD building blocks, including: Entities: Objects with a distinct identity that persists over time. An example might be a Customer object with a unique ID. Value Objects: Immutable objects that describe domain aspects but lack identity. 
They are defined by their attributes, like Money or Date. Aggregates: Clusters of entities and value objects that are treated as a single unit of consistency. Aggregates define transactional boundaries and enforce invariants. Repositories: Abstractions for accessing and managing aggregates. Repositories provide an interface to store and retrieve domain objects, often from a database. Domain Services: Services that encapsulate domain logic that doesn\u0026rsquo;t naturally belong within entities or value objects. They focus on domain operations, often requiring actions across multiple aggregates. The Importance of Ubiquitous Language: Ubiquitous language is the foundation of DDD. It ensures that all stakeholders, from developers to business experts, are on the same page by using consistent terminology. The language evolves alongside the domain model and is reflected in the code, documentation, and discussions. Aggregates and Consistency: Aggregates define transactional boundaries and should enforce consistency rules within those boundaries. For example, an Order aggregate might include OrderItems that must add up to a specific total. Vernon explains how aggregates should be designed to avoid complexity and prevent issues like race conditions and concurrency conflicts. Domain Events: Domain events represent significant occurrences within the domain, such as OrderPlaced or PaymentProcessed. They allow for decoupling different parts of the system by notifying other components of changes without direct dependencies. Domain events facilitate an event-driven approach to building software, where events can trigger actions in other bounded contexts or systems. Implementing DDD with Modern Software Development Practices: The book emphasizes the importance of using modern architectural patterns, like microservices and event sourcing, to support the principles of DDD. Vernon discusses how bounded contexts align with microservices, as each microservice can represent a specific bounded context. This modularity makes it easier to scale and maintain complex applications. Event sourcing is presented as an option for capturing domain events as a source of truth, making it possible to replay events and understand state changes over time. Refactoring and Evolving the Model: DDD is not a one-time effort. Vernon highlights the need for continuous refactoring and improvement of the domain model. Regular refactoring allows the model to evolve in response to new business requirements and insights. This iterative process ensures the software remains aligned with the business over time. DDD and Collaboration: Collaboration between developers and domain experts is essential in DDD. Vernon emphasizes techniques like Event Storming and collaborative modeling workshops to facilitate this collaboration. By working closely with domain experts, developers can uncover nuances in the business logic and build software that better serves the organization. ","permalink":"https://blog.rulyotano.com/books/domain-driven-design/","summary":"
\u003cp\u003e\u003cem\u003eDomain-Driven Design Distilled\u003c/em\u003e by Vaughn Vernon is a concise and practical guide to understanding and applying Domain-Driven Design (DDD). DDD is an approach to software development that emphasizes collaboration between technical experts and domain experts to create models that reflect complex business requirements. This book distills the core principles of DDD, making them accessible to both beginners and experienced practitioners.\u003c/p\u003e","title":"Domain-Driven Design Distilled"},{"content":"Original Book Website\nCracking the Coding Interview by Gayle Laakmann McDowell is a popular resource for software engineers preparing for technical interviews. This book provides in-depth explanations, coding problems, and solutions, along with valuable insights into the interview process at major tech companies. It is designed to help candidates understand the technical skills, problem-solving techniques, and strategies necessary to succeed in coding interviews.\nKey Sections and Concepts: The Interview Process: McDowell begins by demystifying the tech interview process, explaining what companies like Google, Facebook, and Amazon look for in candidates. Covers different types of interviews (phone screens, on-site interviews, behavioral interviews, and coding interviews) and what to expect from each. Offers advice on how to handle non-technical questions, communicate effectively, and demonstrate problem-solving skills during interviews. Technical Foundations and Key Programming Concepts: Before diving into the coding problems, the book provides a refresher on essential computer science concepts that are frequently tested in interviews. Topics include data structures (arrays, linked lists, stacks, queues, trees, graphs, hash tables), algorithms (sorting, searching, recursion), and time/space complexity. McDowell explains Big-O notation and how to analyze the efficiency of algorithms, which is a key skill in optimizing code and performing well in technical interviews. Problem-Solving Techniques: McDowell presents problem-solving strategies to help candidates approach and break down complex coding problems. Techniques include using examples, simplifying problems, breaking problems into smaller components, and practicing “brute-force” solutions before optimizing them. Encourages practicing common coding patterns, understanding problem requirements, and testing solutions with different inputs. 189 Programming Questions and Solutions: The core of the book consists of 189 coding questions covering various topics and difficulty levels, complete with detailed solutions and explanations. Questions are categorized by topics such as: Arrays and Strings: Basics like string manipulation, array rotations, and character counts. Linked Lists: Merging, reversing, finding cycles, and removing duplicates. Stacks and Queues: Implementing stacks/queues, using them in algorithms, and solving problems like balanced parentheses. Trees and Graphs: Traversals, finding nodes, balancing trees, and understanding graph-based problems. Bit Manipulation: Working with binary representation, bit shifting, and XOR operations. Math and Logic Puzzles: Number theory, combinatorial problems, and logic-based challenges. 
Sorting and Searching: Implementing algorithms like quicksort, mergesort, binary search, and solving related problems. Dynamic Programming and Recursion: Using memoization, solving classic problems like Fibonacci, and exploring complex recursive solutions. Each problem includes hints, thorough explanations of the solution, and alternative solutions to encourage deeper understanding. Behavioral Questions and Soft Skills: In addition to technical questions, the book provides guidance on handling behavioral questions, which are often overlooked in technical interview prep. It covers common behavioral interview questions, such as discussing past projects, handling challenges, and explaining career goals. McDowell emphasizes the importance of demonstrating a collaborative attitude, adaptability, and passion for technology, which can impact an interviewer’s impression. Additional Interview Tips: The book shares tips on preparing a strong resume and provides sample resume templates to highlight key skills and achievements. Offers advice on handling unexpected interview scenarios, such as not knowing the answer to a question, and how to ask clarifying questions to better understand the problem. Emphasizes the importance of practicing mock interviews, either with a peer or online, to build confidence and get used to thinking out loud. Understanding the Interviewer\u0026rsquo;s Perspective: McDowell offers insights into what interviewers look for during technical interviews and how they evaluate candidates based on problem-solving approaches, code quality, and communication skills. She explains that interviewers value clarity, logical thinking, and the ability to communicate and discuss the code as it is being written. Conclusion: Cracking the Coding Interview is a comprehensive resource for anyone preparing for software engineering interviews, particularly for roles at top tech companies. It provides an in-depth look at technical and behavioral interview strategies, as well as a wealth of coding problems that cover key data structures and algorithms. By focusing on problem-solving skills, technical foundations, and effective communication, this book equips candidates with the tools they need to tackle challenging interview questions with confidence.\n","permalink":"https://blog.rulyotano.com/books/cracking-coding-interviews/","summary":"\u003cp\u003e\u003ca href=\"https://www.crackingthecodinginterview.com/\"\u003eOriginal Book Website\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e\u003cem\u003eCracking the Coding Interview\u003c/em\u003e by Gayle Laakmann McDowell is a popular resource for software engineers preparing for technical interviews. This book provides in-depth explanations, coding problems, and solutions, along with valuable insights into the interview process at major tech companies. 
It is designed to help candidates understand the technical skills, problem-solving techniques, and strategies necessary to succeed in coding interviews.\u003c/p\u003e\n\u003ch4 id=\"key-sections-and-concepts\"\u003e\u003cstrong\u003eKey Sections and Concepts:\u003c/strong\u003e\u003c/h4\u003e\n\u003col\u003e\n\u003cli\u003e\u003cstrong\u003eThe Interview Process:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eMcDowell begins by demystifying the tech interview process, explaining what companies like Google, Facebook, and Amazon look for in candidates.\u003c/li\u003e\n\u003cli\u003eCovers different types of interviews (phone screens, on-site interviews, behavioral interviews, and coding interviews) and what to expect from each.\u003c/li\u003e\n\u003cli\u003eOffers advice on how to handle non-technical questions, communicate effectively, and demonstrate problem-solving skills during interviews.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eTechnical Foundations and Key Programming Concepts:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eBefore diving into the coding problems, the book provides a refresher on essential computer science concepts that are frequently tested in interviews.\u003c/li\u003e\n\u003cli\u003eTopics include data structures (arrays, linked lists, stacks, queues, trees, graphs, hash tables), algorithms (sorting, searching, recursion), and time/space complexity.\u003c/li\u003e\n\u003cli\u003eMcDowell explains Big-O notation and how to analyze the efficiency of algorithms, which is a key skill in optimizing code and performing well in technical interviews.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eProblem-Solving Techniques:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eMcDowell presents problem-solving strategies to help candidates approach and break down complex coding problems.\u003c/li\u003e\n\u003cli\u003eTechniques include using examples, simplifying problems, breaking problems into smaller components, and practicing “brute-force” solutions before optimizing them.\u003c/li\u003e\n\u003cli\u003eEncourages practicing common coding patterns, understanding problem requirements, and testing solutions with different inputs.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e189 Programming Questions and Solutions:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eThe core of the book consists of 189 coding questions covering various topics and difficulty levels, complete with detailed solutions and explanations.\u003c/li\u003e\n\u003cli\u003eQuestions are categorized by topics such as:\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eArrays and Strings:\u003c/strong\u003e Basics like string manipulation, array rotations, and character counts.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eLinked Lists:\u003c/strong\u003e Merging, reversing, finding cycles, and removing duplicates.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eStacks and Queues:\u003c/strong\u003e Implementing stacks/queues, using them in algorithms, and solving problems like balanced parentheses.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eTrees and Graphs:\u003c/strong\u003e Traversals, finding nodes, balancing trees, and understanding graph-based problems.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eBit Manipulation:\u003c/strong\u003e Working with binary representation, bit shifting, and XOR operations.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eMath and Logic 
Puzzles:\u003c/strong\u003e Number theory, combinatorial problems, and logic-based challenges.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eSorting and Searching:\u003c/strong\u003e Implementing algorithms like quicksort, mergesort, binary search, and solving related problems.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eDynamic Programming and Recursion:\u003c/strong\u003e Using memoization, solving classic problems like Fibonacci, and exploring complex recursive solutions.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eEach problem includes hints, thorough explanations of the solution, and alternative solutions to encourage deeper understanding.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eBehavioral Questions and Soft Skills:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eIn addition to technical questions, the book provides guidance on handling behavioral questions, which are often overlooked in technical interview prep.\u003c/li\u003e\n\u003cli\u003eIt covers common behavioral interview questions, such as discussing past projects, handling challenges, and explaining career goals.\u003c/li\u003e\n\u003cli\u003eMcDowell emphasizes the importance of demonstrating a collaborative attitude, adaptability, and passion for technology, which can impact an interviewer’s impression.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eAdditional Interview Tips:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eThe book shares tips on preparing a strong resume and provides sample resume templates to highlight key skills and achievements.\u003c/li\u003e\n\u003cli\u003eOffers advice on handling unexpected interview scenarios, such as not knowing the answer to a question, and how to ask clarifying questions to better understand the problem.\u003c/li\u003e\n\u003cli\u003eEmphasizes the importance of practicing mock interviews, either with a peer or online, to build confidence and get used to thinking out loud.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eUnderstanding the Interviewer\u0026rsquo;s Perspective:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eMcDowell offers insights into what interviewers look for during technical interviews and how they evaluate candidates based on problem-solving approaches, code quality, and communication skills.\u003c/li\u003e\n\u003cli\u003eShe explains that interviewers value clarity, logical thinking, and the ability to communicate and discuss the code as it is being written.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch4 id=\"conclusion\"\u003e\u003cstrong\u003eConclusion:\u003c/strong\u003e\u003c/h4\u003e\n\u003cp\u003e\u003cem\u003eCracking the Coding Interview\u003c/em\u003e is a comprehensive resource for anyone preparing for software engineering interviews, particularly for roles at top tech companies. It provides an in-depth look at technical and behavioral interview strategies, as well as a wealth of coding problems that cover key data structures and algorithms. 
By focusing on problem-solving skills, technical foundations, and effective communication, this book equips candidates with the tools they need to tackle challenging interview questions with confidence.\u003c/p\u003e","title":"Cracking the Coding Interview: 189 Programming Questions and Solutions"},{"content":"Online Book\n.NET Microservices: Architecture for Containerized .NET Applications is a guide developed by Microsoft to help architects and developers design, build, and deploy microservices-based applications using .NET Core and container technologies. The book provides practical advice and patterns for creating cloud-native applications, with a focus on leveraging containers, Kubernetes, and Docker.\nKey Themes and Concepts: Microservices Architecture Overview: Microservices architecture is a design approach where applications are built as a collection of small, independent services. Each service focuses on a specific business capability, allowing greater agility, scalability, and maintainability. This architecture contrasts with monolithic applications, which are tightly integrated and difficult to scale and update independently. Benefits of Microservices: Scalability: Microservices can be scaled independently, allowing efficient resource allocation based on demand. Deployment Flexibility: Microservices enable independent deployment, which reduces risk and minimizes downtime. Resilience: Failure of a single service is less likely to bring down the entire system, which increases overall system reliability. Technology Freedom: Different services can use different technologies, frameworks, or databases based on their specific needs. Designing Microservices with .NET Core: The book focuses on using .NET Core to develop microservices because of its performance, cross-platform support, and compatibility with containers. Emphasizes DDD (Domain-Driven Design) principles to guide the design of microservices, identifying bounded contexts, and aligning services with business domains. Containers and Docker: Containers provide a lightweight way to package applications and their dependencies, making them portable and consistent across environments. The book explains how Docker is used to build, deploy, and manage containers. It also describes how to create Docker images for .NET applications and optimize them for faster deployments and smaller sizes. Covers best practices for organizing Dockerfiles, managing layers, and handling dependencies to improve performance and manageability. Orchestrating Microservices with Kubernetes: Kubernetes is a container orchestration platform that manages deployment, scaling, and operations of containerized applications across a cluster of machines. The book describes how Kubernetes helps manage complex microservices architectures, handle load balancing, and recover from failures. It also introduces basic Kubernetes concepts such as pods, services, deployments, and namespaces. Service Communication and API Gateways: Microservices communicate with each other over the network, commonly using HTTP/REST, gRPC, or message brokers like RabbitMQ. An API Gateway can be used to simplify and manage client interactions with multiple services, acting as a reverse proxy and handling tasks like request routing, composition, and cross-cutting concerns (e.g., authentication, logging). Discusses Ocelot as a recommended API Gateway for .NET applications. Data Management in Microservices: Each microservice should manage its own data to maintain loose coupling. 
This often results in a polyglot persistence approach where each service may use the best database for its needs. The book explains different data management patterns, including: Database per Service: Each microservice has its own database, ensuring autonomy and reducing dependencies. Event Sourcing and CQRS (Command Query Responsibility Segregation): Patterns to separate read and write operations, allowing different models for better performance and scalability. Distributed Transactions and Saga Pattern: Techniques for managing transactions across multiple services in a distributed system. Event-Driven Communication: Event-driven architectures enable microservices to communicate asynchronously using events rather than direct service-to-service calls. The book discusses event-based messaging using message brokers like RabbitMQ, Azure Service Bus, or Kafka, which can improve system decoupling and resilience. Event-driven communication is essential for implementing eventual consistency and handling long-running business processes. Cross-Cutting Concerns: Microservices require solutions for common tasks such as logging, monitoring, security, and configuration management. The book discusses using tools like Azure Monitor, Application Insights, and centralized logging with Elasticsearch, Fluentd, and Kibana (EFK) stack. Also covers configuration management with tools like Azure Key Vault or HashiCorp Vault for secure and centralized configuration. Microservices Deployment and DevOps: DevOps practices are crucial for microservices, as they enable automated, continuous integration and deployment (CI/CD) pipelines. The book explains how to set up CI/CD for microservices using tools like Azure DevOps and GitHub Actions, focusing on automating builds, tests, and deployments. Introduces concepts like rolling deployments, blue-green deployments, and canary releases to reduce risk during production updates. Security Considerations: Security is vital in a microservices architecture. The book discusses securing microservices with authentication and authorization mechanisms, including OAuth2 and OpenID Connect. Covers how to use tools like Azure Active Directory or IdentityServer4 to secure microservices and manage user identities. Conclusion: .NET Microservices: Architecture for Containerized .NET Applications provides a comprehensive guide to building microservices-based applications with .NET and Docker. It covers key principles and best practices, from design and development to deployment and monitoring. The book emphasizes the importance of independence and scalability, using containers and Kubernetes to manage and orchestrate services. By following this guide, developers can build robust, maintainable, and cloud-native microservices architectures in the .NET ecosystem.\n","permalink":"https://blog.rulyotano.com/books/dotnet-microservices/","summary":"\u003cp\u003e\u003ca href=\"https://learn.microsoft.com/en-us/dotnet/architecture/microservices/\"\u003eOnline Book\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e\u003cem\u003e.NET Microservices: Architecture for Containerized .NET Applications\u003c/em\u003e is a guide developed by Microsoft to help architects and developers design, build, and deploy microservices-based applications using .NET Core and container technologies. 
The book provides practical advice and patterns for creating cloud-native applications, with a focus on leveraging containers, Kubernetes, and Docker.\u003c/p\u003e\n\u003ch4 id=\"key-themes-and-concepts\"\u003e\u003cstrong\u003eKey Themes and Concepts:\u003c/strong\u003e\u003c/h4\u003e\n\u003col\u003e\n\u003cli\u003e\u003cstrong\u003eSecurity Considerations:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eSecurity is vital in a microservices architecture. 
The book discusses securing microservices with authentication and authorization mechanisms, including OAuth2 and OpenID Connect.\u003c/li\u003e\n\u003cli\u003eCovers how to use tools like Azure Active Directory or IdentityServer4 to secure microservices and manage user identities.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch4 id=\"conclusion\"\u003e\u003cstrong\u003eConclusion:\u003c/strong\u003e\u003c/h4\u003e\n\u003cp\u003e\u003cem\u003e.NET Microservices: Architecture for Containerized .NET Applications\u003c/em\u003e provides a comprehensive guide to building microservices-based applications with .NET and Docker. It covers key principles and best practices, from design and development to deployment and monitoring. The book emphasizes the importance of independence and scalability, using containers and Kubernetes to manage and orchestrate services. By following this guide, developers can build robust, maintainable, and cloud-native microservices architectures in the .NET ecosystem.\u003c/p\u003e","title":".NET Microservices: Architecture for Containerized .NET Applications"},{"content":"Clean Architecture by Robert C. Martin (Uncle Bob) is a comprehensive guide on building maintainable, scalable, and flexible software systems. The book presents a set of architectural principles and patterns that help software developers design robust systems that are easy to adapt and evolve over time. It focuses on creating a strong separation of concerns and ensuring that core business logic remains independent from external factors like frameworks, databases, or user interfaces.\nKey Principles and Concepts: Architecture as a Set of Boundaries: The book emphasizes that the primary goal of software architecture is to create boundaries between different parts of the system. This ensures that changes in one area don’t ripple through the entire codebase, making the system easier to modify and scale. The core business logic should be isolated from details like frameworks, databases, and UI, so that those parts can change independently. The Importance of Use Cases: Martin stresses the need to focus on use cases (the core business rules) as the primary concern of software architecture. Everything else—databases, frameworks, interfaces—is secondary and should be interchangeable without affecting the core logic. Clean architecture ensures that use cases remain stable over time, while other details (like how data is stored or retrieved) can change easily. The Dependency Rule: The book outlines the \u0026ldquo;Dependency Rule,\u0026rdquo; which states that dependencies should always point inwards towards the core of the system. This means that inner layers should not know about or depend on outer layers, which include things like UI frameworks, databases, or external libraries. This is key to achieving independence between business logic and infrastructure details. The Layers of Clean Architecture: Entities: The innermost layer, representing core business objects and logic. This layer should be independent of any external frameworks or libraries. Use Cases: Contains application-specific business rules. It orchestrates interactions between entities but remains independent of implementation details. Interface Adapters: Converts data from the format most convenient for use cases and entities to the format required by the outer layers, like databases or UI. 
Frameworks and Drivers (External Parts): The outermost layer, where you find UI, databases, web frameworks, or any third-party APIs. These components are the least important and should be easily replaceable without affecting core business logic. SOLID Principles: Martin revisits the SOLID principles, emphasizing their role in maintaining clean architecture: Single Responsibility Principle (SRP): A class or module should have one, and only one, reason to change. Open/Closed Principle (OCP): Systems should be open for extension but closed for modification, ensuring flexibility without rewriting core logic. Liskov Substitution Principle (LSP): Objects should be replaceable with instances of their subtypes without altering the correctness of the program. Interface Segregation Principle (ISP): Clients should not be forced to depend on interfaces they do not use. Dependency Inversion Principle (DIP): High-level modules should not depend on low-level modules, but both should depend on abstractions. Separation of Concerns: A key aspect of clean architecture is keeping different concerns (e.g., business logic, UI, database) separated from one another. This avoids a tightly coupled system where changes in one area have unintended effects on other parts. By isolating concerns, it becomes easier to develop, maintain, and extend applications without introducing bugs or complexity. Independence from Frameworks: The book argues that frameworks should be seen as tools rather than foundations. Developers should avoid making the core system dependent on a particular framework or tool to prevent future technical debt. Frameworks should only be applied at the boundaries of the system, meaning the core business logic remains independent and unaffected by changes in external libraries. Testability: Clean architecture promotes highly testable code by isolating business rules and reducing dependencies on external factors like databases or user interfaces. Testing becomes easier because the core logic can be tested without involving infrastructure components, allowing for faster and more reliable tests. Component Boundaries and Design: Martin discusses how to design systems with clear component boundaries, where each component has a single responsibility. By breaking systems into smaller, independent parts, developers can improve maintainability and scalability, making it easier to introduce new features or handle increased system loads. Architecture in Practice: Martin emphasizes that every system will eventually face change. The best architectures are those that allow for easy modification and extension. He suggests making architecture decisions that facilitate long-term flexibility rather than relying on short-term solutions. Conclusion: Clean Architecture is a guide for developers and architects aiming to build systems that can withstand the test of time and changing requirements. It offers practical advice for creating systems where business logic is decoupled from details like frameworks, databases, or UIs, ensuring flexibility, testability, and maintainability. 
The emphasis on boundaries, separation of concerns, and SOLID principles equips developers with tools to build robust software that can adapt to inevitable change.\nThis book is essential for software professionals seeking to master the art of designing systems that remain clean, flexible, and scalable in the long term.\n","permalink":"https://blog.rulyotano.com/books/clean-architecture/","summary":"\u003cp\u003e\u003cem\u003eClean Architecture\u003c/em\u003e by Robert C. Martin (Uncle Bob) is a comprehensive guide on building maintainable, scalable, and flexible software systems. The book presents a set of architectural principles and patterns that help software developers design robust systems that are easy to adapt and evolve over time. It focuses on creating a strong separation of concerns and ensuring that core business logic remains independent from external factors like frameworks, databases, or user interfaces.\u003c/p\u003e","title":"Clean Architecture: A Craftsman's Guide to Software Structure and Design"},{"content":"Clean Code by Robert C. Martin, commonly referred to as \u0026ldquo;Uncle Bob,\u0026rdquo; is a foundational book in software engineering. It provides principles, best practices, and examples for writing clean, readable, and maintainable code. The book emphasizes the importance of professionalism and craftsmanship in software development and is structured around practical examples, case studies, and advice.\nKey Principles and Concepts: Meaningful Naming: Names should be descriptive and unambiguous, helping the reader understand the code\u0026rsquo;s purpose. Avoid abbreviations and aim for clarity. For example, a variable name accountBalance is better than accBal. Good names for variables, functions, and classes make the code self-explanatory. Functions: Functions should be small, perform a single task, and do it well. They should ideally be between 1-4 lines long, with each line of code contributing to the function\u0026rsquo;s purpose. Avoid passing too many parameters; keep the number of arguments low to maintain readability and testability. Comments: Martin argues that comments are often a sign of bad code. Instead, write code that is self-explanatory so that comments aren’t necessary. When comments are used, they should clarify code rather than explain what is obvious. Favor clear naming and structure over extensive commenting, as comments can become outdated and misleading over time. Error Handling: Handle errors gracefully with clear and informative messages. Use exceptions instead of return codes and avoid unnecessary try-catch blocks. Keep error-handling code separate from the main logic to improve readability and reduce clutter. Formatting and Readability: Consistent formatting makes code easier to read and navigate. Group related code together, use meaningful whitespace, and maintain a logical flow within files. Use indentation, line breaks, and consistent naming conventions to make code visually appealing and easier to scan. Single Responsibility Principle (SRP): Each class, function, or module should have one reason to change, meaning it should have a single responsibility. This principle promotes separation of concerns, making it easier to understand, test, and modify code. DRY (Don’t Repeat Yourself): Avoid duplication by refactoring common functionality into functions or classes. Repeating code leads to inconsistencies and increases the effort required to make changes. 
Code Smells and Refactoring: Identifying \u0026ldquo;code smells\u0026rdquo; (indicators of deeper issues) is essential to maintaining clean code. Martin emphasizes continual refactoring as a practice to improve and maintain code quality. Common smells include large classes, long methods, and too many parameters. Addressing these issues keeps code simple and more maintainable. Unit Testing and Test-Driven Development (TDD): Tests are an integral part of clean code. Writing tests ensures code functionality and simplifies future modifications. Martin advocates for TDD, where tests are written before the code to ensure that every line of code has a purpose and is testable. Good tests are fast, independent, repeatable, and cover as many edge cases as possible. Continuous Improvement and Professionalism: Martin emphasizes that writing clean code is part of being a professional software developer. Software development is a continuous learning process. Improving and refining skills over time is essential. Strive to leave code better than you found it, and continuously seek ways to enhance the quality of the codebase. Conclusion: Clean Code is both a guide and a philosophy for software developers who seek to improve their craft. It underscores the importance of writing clear, concise, and maintainable code, while promoting a disciplined, methodical approach to coding. This book has become essential reading for developers at all levels, from beginners to experienced professionals, due to its focus on building software that is easy to understand, maintain, and evolve. By following the principles in Clean Code, developers can produce software that is not only functional but also a joy to work with.\n","permalink":"https://blog.rulyotano.com/books/clean-code/","summary":"\u003cp\u003e\u003cem\u003eClean Code\u003c/em\u003e by Robert C. Martin, commonly referred to as \u0026ldquo;Uncle Bob,\u0026rdquo; is a foundational book in software engineering. It provides principles, best practices, and examples for writing clean, readable, and maintainable code. The book emphasizes the importance of professionalism and craftsmanship in software development and is structured around practical examples, case studies, and advice.\u003c/p\u003e\n\u003ch4 id=\"key-principles-and-concepts\"\u003e\u003cstrong\u003eKey Principles and Concepts:\u003c/strong\u003e\u003c/h4\u003e\n\u003col\u003e\n\u003cli\u003e\u003cstrong\u003eMeaningful Naming:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eNames should be descriptive and unambiguous, helping the reader understand the code\u0026rsquo;s purpose.\u003c/li\u003e\n\u003cli\u003eAvoid abbreviations and aim for clarity. 
For example, a variable name \u003ccode\u003eaccountBalance\u003c/code\u003e is better than \u003ccode\u003eaccBal\u003c/code\u003e.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eCode Smells and Refactoring:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eCommon smells include large classes, long methods, and too many parameters. 
Addressing these issues keeps code simple and more maintainable.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eUnit Testing and Test-Driven Development (TDD):\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eTests are an integral part of clean code. Writing tests ensures code functionality and simplifies future modifications.\u003c/li\u003e\n\u003cli\u003eMartin advocates for TDD, where tests are written before the code to ensure that every line of code has a purpose and is testable.\u003c/li\u003e\n\u003cli\u003eGood tests are fast, independent, repeatable, and cover as many edge cases as possible.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eContinuous Improvement and Professionalism:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eMartin emphasizes that writing clean code is part of being a professional software developer.\u003c/li\u003e\n\u003cli\u003eSoftware development is a continuous learning process. Improving and refining skills over time is essential.\u003c/li\u003e\n\u003cli\u003eStrive to leave code better than you found it, and continuously seek ways to enhance the quality of the codebase.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch4 id=\"conclusion\"\u003e\u003cstrong\u003eConclusion:\u003c/strong\u003e\u003c/h4\u003e\n\u003cp\u003e\u003cem\u003eClean Code\u003c/em\u003e is both a guide and a philosophy for software developers who seek to improve their craft. It underscores the importance of writing clear, concise, and maintainable code, while promoting a disciplined, methodical approach to coding. This book has become essential reading for developers at all levels, from beginners to experienced professionals, due to its focus on building software that is easy to understand, maintain, and evolve. By following the principles in \u003cem\u003eClean Code\u003c/em\u003e, developers can produce software that is not only functional but also a joy to work with.\u003c/p\u003e","title":"Clean Code: A Handbook of Agile Software Craftsmanship"},{"content":"Martin Kleppmann\u0026rsquo;s Designing Data-Intensive Applications is an in-depth guide that explores the architecture, design, and scalability of modern data-intensive systems. The book covers the fundamental concepts, trade-offs, and best practices for designing robust, scalable, and maintainable systems that process large amounts of data.\nKey Themes and Topics: Reliability, Scalability, and Maintainability: These three pillars form the basis for designing data-intensive applications. Reliability ensures the system works correctly even in adverse conditions. Scalability handles increasing loads, often through distributed systems. Maintainability ensures systems can evolve over time with minimal disruption. Data Models and Query Languages: The book delves into different data models, including relational (SQL), document (NoSQL), key-value, graph-based, and column-oriented databases. Kleppmann highlights the trade-offs in choosing the right data model and query language based on application needs. Storage and Retrieval: Discusses various storage engines, such as log-structured merge trees (LSM), B-trees, and SSTables. Explains indexing, caching, and other mechanisms to improve data access efficiency. The emphasis is on durability and consistency in data storage, whether in-memory or on disk. 
Distributed Data: Provides insights into handling distributed systems, including the challenges of consistency, consensus, replication, and partitioning. The book introduces distributed consensus algorithms, such as Paxos and Raft, as well as concepts like the CAP theorem (Consistency, Availability, Partition Tolerance) and the trade-offs between them. Discusses various forms of replication (e.g., leader-follower replication) and partitioning strategies. Consistency, Transactions, and Isolation: The book delves deeply into consistency models (e.g., eventual consistency, strong consistency) and the trade-offs between them. Kleppmann explains the importance of ACID (Atomicity, Consistency, Isolation, Durability) transactions in traditional databases and contrasts them with BASE (Basically Available, Soft state, Eventual consistency) in distributed systems. Distributed Systems Design: Describes various techniques to make distributed systems fault-tolerant, such as replication, leader election, and recovery from failures. Covers the importance of consensus algorithms (like Raft and Paxos) in ensuring consistent state across distributed systems. Batch and Stream Processing: Compares batch processing (e.g., Hadoop, Spark) and stream processing (e.g., Kafka, Storm) frameworks. Discusses the trade-offs between latency and throughput and how to choose the right architecture for real-time or near-real-time data processing. Dataflow and Message-Driven Architectures: Focuses on how data flows through systems in event-driven architectures using message queues and logs (e.g., Kafka). Explains how systems can be designed to be resilient to failure while maintaining performance through asynchrony and eventual consistency. Security and Privacy: Highlights best practices for securing data, ensuring encryption both at rest and in transit, and maintaining privacy standards, including compliance with regulations like GDPR. Conclusion: Designing Data-Intensive Applications is a practical, in-depth resource for software engineers, architects, and systems designers involved in building scalable, fault-tolerant data systems. It emphasizes understanding trade-offs and practical approaches to building reliable systems that handle vast amounts of data in distributed environments.\nThe book\u0026rsquo;s careful breakdown of complex topics—distributed systems, data modeling, replication, and processing paradigms—makes it essential reading for professionals dealing with large-scale data challenges in modern applications.\n","permalink":"https://blog.rulyotano.com/books/data-intensive-apps/","summary":"\u003cp\u003eMartin Kleppmann\u0026rsquo;s \u003cem\u003eDesigning Data-Intensive Applications\u003c/em\u003e is an in-depth guide that explores the architecture, design, and scalability of modern data-intensive systems. 
The book covers the fundamental concepts, trade-offs, and best practices for designing robust, scalable, and maintainable systems that process large amounts of data.\u003c/p\u003e\n\u003ch4 id=\"key-themes-and-topics\"\u003e\u003cstrong\u003eKey Themes and Topics:\u003c/strong\u003e\u003c/h4\u003e\n\u003col\u003e\n\u003cli\u003e\u003cstrong\u003eBatch and 
Stream Processing:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eCompares batch processing (e.g., Hadoop, Spark) and stream processing (e.g., Kafka, Storm) frameworks.\u003c/li\u003e\n\u003cli\u003eDiscusses the trade-offs between latency and throughput and how to choose the right architecture for real-time or near-real-time data processing.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eDataflow and Message-Driven Architectures:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eFocuses on how data flows through systems in event-driven architectures using message queues and logs (e.g., Kafka).\u003c/li\u003e\n\u003cli\u003eExplains how systems can be designed to be resilient to failure while maintaining performance through asynchrony and eventual consistency.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eSecurity and Privacy:\u003c/strong\u003e\n\u003cul\u003e\n\u003cli\u003eHighlights best practices for securing data, ensuring encryption both at rest and in transit, and maintaining privacy standards, including compliance with regulations like GDPR.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch4 id=\"conclusion\"\u003e\u003cstrong\u003eConclusion:\u003c/strong\u003e\u003c/h4\u003e\n\u003cp\u003e\u003cem\u003eDesigning Data-Intensive Applications\u003c/em\u003e is a practical, in-depth resource for software engineers, architects, and systems designers involved in building scalable, fault-tolerant data systems. It emphasizes understanding trade-offs and practical approaches to building reliable systems that handle vast amounts of data in distributed environments.\u003c/p\u003e","title":"Designing Data-Intensive Applications"},{"content":" [Original Article] Sample on GitHub (WPF) Sample on Github (JavaScript) Live example in JavaScript (ReactJs) Introduction Interpolating points is sometimes hard mathematical work, even more so if the points are ordered. The solution is to create a function from the points using an extra parameter t that represents the time dimension. This is often called a parametric representation of the curve. This article shows a simple way of interpolating a set of points using Bezier curves in WPF.\nBackground The idea for this solution came after asking this question on Stack Overflow. The accepted answer references a simple and interesting method proposed by Maxim Shemanarev, where the control points are calculated from the original points (called anchor points).\nHere, we create a WPF UserControl that draws the curve from any collection of points. This control can be used with the MVVM pattern. If any point\u0026rsquo;s coordinate changes, the curve will also change automatically. For instance, it can be used for a drawing application, where you can drag \u0026amp; drop the points to change the drawing, or curve.\nThe Algorithm Behind Since the original antigrain site is down (SourceForge is still hosting this library, and we can find the original article over here!), I\u0026rsquo;m going to explain the algorithm proposed by Maxim Shemanarev.\nA Bezier curve has two anchor points (begin and end) and two control points (CP) that determine its shape. Our anchor points are given; they are pairs of vertices of the polygon. The question is, how do we calculate the control points? 
It is obvious that the control points of two adjacent edges must form one straight line together with the vertex between them.\nThe solution found is a very simple method that does not require any complicated math. First, we take the polygon and calculate the middle points Ai of its edges.\nHere, we have line segments Ci that connect two points Ai of the adjacent segments. Then, we should calculate points Bi as shown in this picture.\nThe third step is final. We simply move the line segments Ci in such a way that their points Bi coincide with the respective vertices. That\u0026rsquo;s it, we have calculated the control points for our Bezier curve, and the result looks good.\nOne little improvement. Since we have a straight line that determines the place of our control points, we can move them along it as we want, changing the shape of the resulting curve. I used a simple coefficient K that moves the points along the line relative to the initial distance between vertices and control points. The closer the control points are to the vertices, the sharper the resulting figure will be.\nThe method works quite well with self-intersecting polygons. The examples below show that the result is pretty interesting.
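Before moving on to the implementation, here is a minimal TypeScript sketch of the three steps above for a single segment, in the spirit of the JavaScript sample linked at the top (the Point type and function name are illustrative, not the library's exact API):

interface Point { x: number; y: number; }

const mid = (a: Point, b: Point): Point => ({ x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 });
const dist = (a: Point, b: Point): number => Math.hypot(b.x - a.x, b.y - a.y);

// Control points for the segment p1 -> p2; p0 and p3 are the neighboring
// vertices, and k is the smoothness coefficient (smaller k => sharper figure).
function getControlPoints(p0: Point, p1: Point, p2: Point, p3: Point, k = 0.8): [Point, Point] {
  // Step 1: middle points Ai of the three edges
  const a1 = mid(p0, p1), a2 = mid(p1, p2), a3 = mid(p2, p3);
  // Step 2: points Bi split the segments Ci proportionally to the edge lengths
  const l1 = dist(p0, p1), l2 = dist(p1, p2), l3 = dist(p2, p3);
  const k1 = l1 / (l1 + l2), k2 = l2 / (l2 + l3);
  const b1: Point = { x: a1.x + (a2.x - a1.x) * k1, y: a1.y + (a2.y - a1.y) * k1 };
  const b2: Point = { x: a2.x + (a3.x - a2.x) * k2, y: a2.y + (a3.y - a2.y) * k2 };
  // Step 3: translate each Ci so that Bi coincides with its vertex, then pull
  // the control points toward the vertices with the coefficient k
  return [
    { x: b1.x + (a2.x - b1.x) * k + p1.x - b1.x, y: b1.y + (a2.y - b1.y) * k + p1.y - b1.y },
    { x: b2.x + (a2.x - b2.x) * k + p2.x - b2.x, y: b2.y + (a2.y - b2.y) * k + p2.y - b2.y },
  ];
}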
The Class for Calculation Below is the class that makes the calculation of the spline segments, based on the algorithm exposed above. This class is named InterpolationUtils; it has a static method (named InterpolatePointWithBezierCurves) that returns a list of BezierCurveSegment, which will be the solution to our problem.\nThe class BezierCurveSegment has the four properties that define a spline segment: StartPoint, EndPoint, FirstControlPoint, and SecondControlPoint.\nAs the above algorithm was originally implemented for closed curves, and it is desired that it can be applied to open curves too, a little change is needed. For this reason, the InterpolatePointWithBezierCurves method receives a second parameter, a boolean variable named isClosedCurve, which determines if the algorithm will return an open or closed curve. Since we take four points (x1 = the current point and x2 = the next point, plus two more points that are required for creating the three edges: x0 = the current point's previous point and x3 = the next point's next point), the selection of x0 and x3 will be like this:\nIf it is a closed curve: if x1 is the first point, then x0 is going to be the last point (in this implementation, the last but one, because the last point is the same as the first one), and if x2 is the last point, then x3 is going to be the first point (similarly, in this implementation, it is going to be the second point). If it is an open curve, then x0 = x1 and x3 = x2 for the previous cases. The User Control The user control that we propose is very simple to use, and it works with the MVVM pattern.\nThe LandMarkControl has only two dependency properties: one for the points, and another for the color of the curve. The most important is the Points property. It is of IEnumerable type, and it assumes that each item has X and Y properties.\nIn case the collection of points implements the INotifyCollectionChanged interface, the control will register to the CollectionChanged event, and if each point implements the INotifyPropertyChanged interface, the control will also register to the PropertyChanged event. In this way, every time any point is added or removed, or any point's coordinates change, the control will be refreshed.\nThis is the complete user control code behind:\nAnd this is the XAML code:\nExamples of Usage Using the control for creating the data template for the LandMarkViewModel:\nNow everywhere a LandMarkViewModel is displayed, this data template will show the item as a LandMarkControl. It needs to be rendered on a Canvas:\nThis is a final image example:\nReferences Bézier curve De Casteljau\u0026rsquo;s algorithm Interpolation By Bezier Curves ","permalink":"https://blog.rulyotano.com/blog/article/wpf-bezier-interpolation/","summary":"\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://main.codeproject.com/articles/Interpolate-2D-Points-Using-Bezier-Curves-in-WPF\"\u003e\u003cstrong\u003e[Original Article]\u003c/strong\u003e\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/rulyotano/WPF-Bezier-Interpolation/tree/master\"\u003e\u003cstrong\u003eSample on GitHub (WPF)\u003c/strong\u003e\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/rulyotano/rulyotano.crosscutting.js/tree/main/src/rulyotano.math.interpolation.bezier\"\u003e\u003cstrong\u003eSample on Github (JavaScript)\u003c/strong\u003e\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://rulyotano.com/demos/bezier\"\u003e\u003cstrong\u003eLive example in JavaScript (ReactJs)\u003c/strong\u003e\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eInterpolating points is sometimes hard mathematical work, even more so if the points are ordered. The solution is to create a function from the points using an extra parameter \u003ccode\u003et\u003c/code\u003e that represents the time dimension. This is often called a parametric representation of the curve. This article shows a simple way of interpolating a set of points using Bezier curves in WPF.\u003c/p\u003e","title":"Interpolate 2D Points Using Bezier Curves in WPF (and Javascript)"},{"content":" [Original Article] In this write-up, I want to show two ways to create custom directory trees by using the tree-extended tool:\nBy using tree-extended directly in your OS command line by installing it as a node package. Or, by using the tree-extended vscode extension. Why tree-extended?\nI was documenting one of my projects and I wanted to write in markdown a directory tree representation, but I didn't want to show all the directories, just a particular one: the one that I was talking about in that section of the document. There is a command for Linux named tree that you can install, but it didn't match all the requirements I was looking for. That is why I created tree-extended as a custom implementation of tree.\nUsing tree-extended in the command line\nFirst, we need to be sure we have NodeJs \u0026gt;= 6.x installed on our Mac or PC.\nThen just open your terminal and run npm install tree-extended -g . This will install the npm package globally; this way you will be able to run the tree-extended command in your terminal.\nOk. Now we have it installed. We can run tree-extended -h to see the help. Here we get all the features we can use:\nSpecify a max tree depth we want to get Specify what to do when we reach this max tree depth: print ... or nothing We can choose a charset to use, currently ascii , utf8 , and utf8-icons We can choose if we want to ignore the items defined in the .gitignore file. 
We can filter the items by defining ignore and only filters. Here is the true magic: we can create these filters at tree-depth levels. For example: aaa, bbb, 0:ccc, 3:ddd, 3:ffff, 5:eee ... ; this means: aaa and bbb are global, ccc is a filter that will apply to the root level, ddd and ffff will be applied to level 3, and so on… (a rough model of these filter semantics is sketched at the end of this article). One interesting example Having this complete directory:\n├───a/ │ ├───aa/ │ ├───ab/ │ └───ac/ ├───a1/ ├───b/ │ ├───ba/ │ │ ├───bafile1.txt │ │ └───bafile2.txt │ ├───bb/ │ ├───bc/ │ │ └───bca/ │ │ └───bca-file1.txt │ ├───bd/ │ └───bfile1.txt ├───c/ ├───c1/ └───d/ ├───d1/ └───d2/ We run this command: tree-extended -only=\u0026quot;0:b, 1:bc, 2:bca\u0026quot; and we get:\n├───b/ │ └───bc/ │ └───bca/ │ └───bca-file1.txt └───ba/ We can see that we can restrict the search to only show one path to a file or to a directory. But we are doing it here by using 0:b (at level 0, restrict to items matching b). The problem is that this pattern (b) also matches ba/. To avoid this, we can use regular expressions in the patterns:\nNow run instead: tree-extended -only=\u0026quot;0:b$, 1:bc, 2:bca\u0026quot; and we get:\n└───b/ └───bc/ └───bca/ └───bca-file1.txt By using 0:b$ we are using a regular expression that means: match all items ending in b . In this way, we exclude ba/ from the result.\nYou can find more examples and use cases by going directly to the library's web page.\nUsing tree-extended VsCode Plugin\nTo make this easier to use, I decided to create an extension for VsCode. To install it, just go to the Extensions menu item and search for the tree-extended extension.\nGo to the Extension Menu and search ‘tree-extended’\nNow, using it is as simple as right-clicking on the directory you want to get the tree from and clicking Get tree representation\nRight-click on /src and then click ‘Get tree representation’\nAfter this, the extension is going to ask if you want to use a custom configuration or the default one:\nChoose not to use a custom configuration\nBy choosing No, we are going to use the default configuration (we can change it in the plugin configuration settings)\nResult when using the default configuration for /src directory tree\nIf we click Yes to use a custom configuration, we are able to specify the maximum directory tree depth and the only and ignore filters:\nUsing custom configuration to create the directory tree\nTo customize the rest of the options, we go to the plugin configuration settings:\nChanging plugin configuration settings\nAnd that’s it!
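One last aside: here is a rough TypeScript model of the per-level filter semantics described earlier (illustrative only; this is not tree-extended's actual source code):

type LevelFilter = { level?: number; pattern: RegExp };

// Parse a spec like "aaa, 0:b$, 1:bc" into global and level-scoped filters.
function parseFilters(spec: string): LevelFilter[] {
  return spec
    .split(",")
    .map(s => s.trim())
    .filter(Boolean)
    .map(s => {
      const scoped = /^(\d+):(.*)$/.exec(s);
      return scoped
        ? { level: Number(scoped[1]), pattern: new RegExp(scoped[2].trim()) }
        : { pattern: new RegExp(s) }; // no "level:" prefix => global filter
    });
}

// An item at `depth` passes an "only" spec when no filter targets its depth,
// or when at least one applicable filter matches its name.
function passesOnly(name: string, depth: number, filters: LevelFilter[]): boolean {
  const applicable = filters.filter(f => f.level === undefined || f.level === depth);
  return applicable.length === 0 || applicable.some(f => f.pattern.test(name));
}

// passesOnly("b", 0, parseFilters("0:b$, 1:bc"))  -> true
// passesOnly("ba", 0, parseFilters("0:b$, 1:bc")) -> false ("b$" must match the end)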
Feel free to play with it; maybe it will be helpful to you in the future.\n","permalink":"https://blog.rulyotano.com/blog/article/tree-extended-tool/","summary":"\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://medium.com/@rulyotano/tree-extended-a-tool-to-get-custom-directories-trees-3dea42ebf407\"\u003e\u003cstrong\u003e[Original Article]\u003c/strong\u003e\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eIn this write-up, I want to show two ways to create custom directory trees by using the \u003ca href=\"https://github.com/rulyotano/tree-extended\"\u003etree-extended\u003c/a\u003e tool:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eBy using \u003ca href=\"https://github.com/rulyotano/tree-extended\"\u003etree-extended\u003c/a\u003e directly in your OS command line by installing it as a node package.\u003c/li\u003e\n\u003cli\u003eOr, by using the \u003ca href=\"https://marketplace.visualstudio.com/items?itemName=rulyotano.tree-extended\"\u003etree-extended vscode extension\u003c/a\u003e.\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003e\u003cstrong\u003eWhy tree-extended?\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eI was documenting one of my projects and I wanted to write in markdown a directory tree representation, but I didn’t want to show all the directories, just a particular one: the one that I was talking about in that section of the document. There is a command for Linux named \u003ccode\u003etree\u003c/code\u003e that you can install, but it didn’t match all the requirements I was looking for. That is why I created \u003ccode\u003etree-extended\u003c/code\u003e as a custom implementation of \u003ccode\u003etree\u003c/code\u003e.\u003c/p\u003e","title":"Tree-Extended, a tool to get custom directories trees"},{"content":" [Original Article] In this article, I want to show you how to create your personal website for free!\nTo get that, we are going to use GitHub Pages, which will allow us to host our website and will even give us a secure (HTTPS) URL.\nOptionally, we can even use our own domain name; in that case, it is the only thing you will need to pay for.\nAs the title says, we want to create the website using ReactJs.\nWhy?\nWell, you can create it more simply by following the recommendations from GitHub Pages. You can even get a website template with mock data that you can change!\nBut, as a software developer who knows something about ReactJs, I wanted to create a custom personal page that I can change in the future, using the UI controls and libraries that I want. Maybe doing this is a bit more difficult, but I think it is the price to pay for customization.\nIn the end, I used the MaterialUI and NextJs libraries. I will explain why later.\nGitHub Pages GitHub Pages is a service that GitHub gives developers for free to create personal or company websites. You only need a GitHub account; then create a GitHub repository named \u0026lt;yourGithubUsername\u0026gt;.github.io . After that, you commit and push a file index.html to your repo, and that is it. You will have your site working (maybe it takes some minutes). You are able to check it at https://\u0026lt;yourGitHubUsername\u0026gt;.github.io .\nAs we can render static web pages (GitHub Pages suggests using Jekyll to generate your site), I decided to use NextJs in order to create my ReactJs app. Next.js allows ReactJs server-side rendering, but it also generates a static web app that we can just copy into the github.io repo and push.\nWhy Next.js? 
We can use ReactJs to create Single Page Applications (SPAs); it has a routing library (react-router) which we can use to create multiple pages and more complex and complete applications in general. The problem is that we will need a Node.js app to host it. That is why we decided to use NextJs.\nNext.js allows us to do server-side rendering, which is just creating the webpage before sending it to the browser and then adding ReactJs and the rest of the javascript code. This is very useful in terms of SEO. But also, NextJs builds and exports a static website, and we can just copy and paste it into our GitHub Page, which will work.\nMy goal in creating my personal web is to create a functional and customizable web in React, but not to use the purest React best practices, like using redux, react-router, getting the data from API endpoints, or using GraphQL… That is why I get the data from static files and I don't care about unit tests and so on.\nLet's create our GitHub Pages repository As mentioned before, we need to create a new repository with this pattern \u0026lt;yourGitHubUsername\u0026gt;.github.io , so, in my case, the repo name is rulyotano.github.io . After that, you may clone your repo into your local machine and create an index.html file with some content. After pushing the changes to GitHub, you should see something in https://\u0026lt;yourGitHubUsername\u0026gt;.github.io ; it may take some seconds.\nAt this point, it is good to know that you can use Jekyll to generate your website. It is the recommended option from GitHub Pages. They describe it like this:\nJekyll is a static site generator with built-in support for GitHub Pages and a simplified build process. Jekyll takes Markdown and HTML files and creates a complete static website based on your choice of layouts. Jekyll supports Markdown and Liquid, a templating language that loads dynamic content on your site.\nIf you don't want to use Jekyll, you are free to create your own static web. That is basically what we are going to do here.\nAnother important thing here is that you can enable HTTPS on your website, and you can also use your custom domain. I recommend following the guides as they are pretty well explained. Basically, you will need to go to your GitHub repository -\u003e Settings tab -\u003e Pages (under the Code and Automation section). Here you can configure both. Also, you will probably need to add some records to your domain provider configuration, and in order to verify you own this website, you will need to include a CNAME file in your code with your domain name as a value.\nLet's create our Website project and repository Technically, we don't need to create another repository for this project; we could directly update our GitHub Pages repo. But since we are going to create a Next.js + React.js project, it needs to be built and released, and the output is what we copy into our Pages repo. If we don't create this source repo, we can't update our website later, or at least we would have to do it directly in the output and would lose the ReactJs benefits.\nFirst, we need to create the new GitHub repository; the name doesn't matter. I just named it mui-profile (because I'm using Material-Ui). After that, just clone it into your local machine.\nThen let's create the Next.js application. 
Then let's create the Next.js application.
Before continuing, if you would like to know more about NextJs, just check this: What is Next.js?; you can also find the complete step-by-step guide to set it up here: Getting Started | Next.js.
We need to have Node.js installed (12.22.0 or later) and, in my case, I will use yarn (you can install yarn as another global npm package). As the guide says, just run this command to create the app: yarn create next-app --typescript . I want to use TypeScript, so I added the --typescript option. After installing all the dependencies, you will have a complete and ready-to-go NextJs project.
Note: after running the above command, NextJs will create its files in another directory named after the project name you chose, which adds an extra depth level. I recommend avoiding this and moving all files and directories to the root folder.
Now run the app:
cd ./your-project-name
yarn dev
And that is it. If port 3000 is free, you should be able to open http://localhost:3000 in your browser and see your brand new app.
Let's customize our web
Now that we have a React.js app ready, it is up to you. If you go to the pages directory, you will find that it is where the pages are defined (currently only the index).
└───📁 pages/
    ├───📁 api/
    │   └───...
    ├───📄 index.tsx
    └───📄 _app.tsx
The _app.tsx file acts as a parent to all the pages; you can put the code common to every page there. It receives a Component in its props, which is basically the child page. I use it to add all the shared things: styles, headers, and so on.
From here on, I'm only going to give you some tips based on issues I ran into while customizing my web with Material-UI.
The first thing we need to configure is the UI library we want to use, so that it works properly with server rendering. For that, we can use the "Custom Document" and set it up there; what exactly is needed depends on the library and the version. In my case, I used MUI version 4.x, but frontend libraries move fast: version 5.x is out now and brings some breaking changes. The key is what we do inside the .getInitialProps function: we execute whatever custom library code is needed before the HTML is generated (a minimal sketch for MUI 4.x is included at the end of this section).
Another problem I found was that the MUI JavaScript and CSS were overlapping with styles already present in the server-side generated HTML. To solve that, I removed those server-generated styles from the HTML body by adding the following hook to the _app.tsx file:
React.useEffect(() => {
  const jssStyles = document.querySelector('#jss-server-side');
  if (jssStyles) {
    jssStyles.parentElement.removeChild(jssStyles);
  }
}, []);
Having several pages
To have several pages, you need to do a couple of things. First, add a new file to the pages directory. For example, adding articles.tsx to the pages directory creates an /articles route.
Also, in order for it to work properly with the static export, I needed to add an entry to the exportPathMap method in the next.config.js file:
exportPathMap: function () {
  return {
    '/': { page: '/' },
    '/articles': { page: '/articles' }, // <---- add this
  };
}
You can add more complex routes, dynamic ones for example; I recommend reading the NextJs routing documentation.
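Coming back to the Custom Document setup mentioned above: the embedded example from the original post is not reproduced here, but a minimal sketch of a pages/_document.tsx for MUI 4.x, based on the ServerStyleSheets recipe from the Material-UI v4 documentation, could look like the following (treat it as an illustration, not the article's exact file):

import React from 'react';
import Document, { DocumentContext, Html, Head, Main, NextScript } from 'next/document';
import { ServerStyleSheets } from '@material-ui/core/styles';

export default class MyDocument extends Document {
  // Runs on the server, so we can collect the JSS styles
  // produced while rendering the page.
  static async getInitialProps(ctx: DocumentContext) {
    const sheets = new ServerStyleSheets();
    const originalRenderPage = ctx.renderPage;

    // Wrap the whole app in the style collector.
    ctx.renderPage = () =>
      originalRenderPage({
        enhanceApp: (App) => (props) => sheets.collect(<App {...props} />),
      });

    const initialProps = await Document.getInitialProps(ctx);
    return {
      ...initialProps,
      // Inject the collected styles into the server-rendered HTML;
      // this produces the "jss-server-side" style tag that the
      // _app.tsx hook shown earlier removes on the client.
      styles: [...React.Children.toArray(initialProps.styles), sheets.getStyleElement()],
    };
  }

  render() {
    return (
      <Html lang="en">
        <Head />
        <body>
          <Main />
          <NextScript />
        </body>
      </Html>
    );
  }
}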
Let's talk about the content
I wanted to avoid using databases and API requests in this project. That is why I just put the content in a .json file. You can do it however you prefer.
You may store the images in the public/ directory and then just reference them locally (href="/img/yourimage.png"), either directly or from your data source.
But in my case, I used a trick I found to use Google Photos as an image source. I liked it because:
It doesn't overload the repository or the web server with the images. I can easily request an image in the size I want.
The procedure is the following:
1. Create a Google Photos album. 2. Share it by creating a public link. 3. Add to the album the images you want to use on your website. 4. Open the album link from step 2 in an incognito web browser. 5. Open any image, right-click it, and copy the image URL to use in your content.
Now comes the fun part that Google does for us. You may notice the image URL ends with something like =w1239-h929-no ; that is the size of the image being served. So, if I just want a small image to show in list items, I request it with w30 (width 30px) instead.
So, what I do is save the image URL without that suffix in my static content data. Then I created a js-helper that builds the image URL in the size I want before rendering (a sketch of such a helper follows this section).
And then I just use it to get the final image URL:
const imageWithSize = useMemo(() => getGoogleImageWithSize(image, 100), [image]);
<CardMedia
  component="img"
  alt={title}
  height="100"
  image={imageWithSize}
  title={title}
  className={classes.image}
/>
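The helper itself did not survive in the text above; here is a minimal sketch of what a getGoogleImageWithSize function could look like (the name comes from the usage snippet; the body is an assumption based on the URL convention just described):

// Builds a Google Photos image URL at the requested width.
// Assumes the stored URL may or may not still carry a "=w...-h...-no" size suffix.
const SIZE_SUFFIX = /=w\d+-h\d+(-no)?$/;

export function getGoogleImageWithSize(url: string, width: number): string {
  // Drop any existing size suffix, then ask Google for the desired width.
  return `${url.replace(SIZE_SUFFIX, '')}=w${width}`;
}

For example, getGoogleImageWithSize(image, 30) would produce a 30px-wide thumbnail for list items.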
After you get your site ready (or at least a first version), you can run yarn build in your terminal. This will generate an out directory with your website ready to copy into your GitHub Pages repo. Copy the content into your github.io project (you should delete all the previous content first, or at least override it), commit, and push. In a few minutes, you should be able to see the new changes on your website.
There are a couple of things I needed to do here. The first one was to create an empty .nojekyll file in the output directory (the root of the website). This is needed for the JavaScript code to work properly. Maybe this has changed in the latest versions, but at least I had this issue.
The next thing I needed to do was to add a CNAME file to the root of the website, in order to verify the domain name. The content of this file should be your domain name.
To get this done automatically when running the yarn build command, I just modified that command a bit. Go to your package.json file and, in the scripts section, override the build entry with the following (replace rulyotano.com with your domain name):
"build": "next build && next export && echo '' > ./out/.nojekyll && echo rulyotano.com > ./out/CNAME",
This should be enough! But, in order to update my profile webpage, do I need to update, commit, and push two repositories? In the next section, I will explain how I automated this process.
Automate your deployment with GitHub Actions
As the title says, yes, we are going to use GitHub Actions to build, test, copy the files to the github.io repo, and push them. If you are not familiar with GitHub Actions, it is a mechanism GitHub created to automate things in your projects; we can use it to implement our CI/CD pipelines (Continuous Integration/Continuous Deployment).
First, we need to create a GitHub token to authorize the external process to make changes to our GitHub repositories. To do that, go to your GitHub settings page (top-right corner menu > Settings), then to Developer settings, and finally to Personal access tokens (or you can get there just using this link). There, generate the new access token and save the value. Be careful, because once you leave this page you won't see the value again.
You will add this token to the secrets of your ReactJs website repo. Go to the project's page, then to Settings > Secrets > Actions, and click New repository secret. Paste your personal access token and give the secret a name; I used API_TOKEN_GITHUB .
The next step is to create the action pipeline. In the same repo, create a .yml file in the .github/workflows directory. You can copy the workflow file from my profile webpage repository (linked in the summary below) and adapt it to your needs.
After that, if everything is OK, every time your master branch is updated you will also be updating your github.io repo and your website. This way, you only need to update one repository, and it becomes a lot easier.
Summary
As a result of this article, we get a React.js website, hosted by GitHub Pages, that we can easily update by accepting a PR or by making a push. I hope you can take something from here. The following links are the projects I created and the final result (my personal website) in case you want to use them as guidance:
My GitHub page repository My profile webpage repository My personal website
","permalink":"https://blog.rulyotano.com/blog/article/build-your-personal-website-free-reactjs/","summary":"\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://medium.com/better-programming/get-your-personal-website-for-free-create-it-with-reactjs-b7e3c3c874b4\"\u003e\u003cstrong\u003e[Original Article]\u003c/strong\u003e\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eIn this article, I want to show you how to create your personal website for free!\u003c/p\u003e\n\u003cp\u003eTo get that, we are going to use \u003ca href=\"https://pages.github.com/\"\u003eGitHub Pages\u003c/a\u003e which will allow us to host our website and even will give us a secure (HTTPS) URL.\u003c/p\u003e\n\u003cp\u003eOptionally, we can even use our own domain name, in such case, it is the only thing that you will need to pay for.\u003c/p\u003e\n\u003cp\u003eAs the title says, we want to create the website using ReactJs.\u003c/p\u003e","title":"Build Your Personal Website for Free Using React.js"}]