Full-Stack Mastery
From "I can code" → "I can design & ship production systems"
A 12-week intensive curriculum designed by engineers, for engineers. Build real projects, harden them for production, and leave with stronger interview and portfolio proof.
Fundamentals: The 12 Core Skills (C/C++ Route)
This is the foundation every engineer needs: C for understanding the machine, C++ for abstraction at scale, and the professional habits that make you reliable in any stack.
- •Compilation pipeline: preprocess, compile, link
- •Pointers, memory layout, stack vs heap
- •Core C standard library (stdio, stdlib, string)
- •Undefined behavior and defensive coding
- •Explain how source code becomes an executable.
- •Write small memory-safe C programs with explicit allocation and cleanup.
- •What changes between preprocessing, compilation, and linking?
- •What is the difference between stack and heap allocation?
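To make the stack-versus-heap question above concrete, here is a minimal C-style sketch (it compiles as C or C++); the buffer names and sizes are illustrative:

```cpp
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Stack: automatic storage, reclaimed when the function returns. */
    char stack_buf[32];
    strcpy(stack_buf, "lives on the stack");

    /* Heap: explicit allocation; you own the cleanup. */
    char *heap_buf = (char *)malloc(32);
    if (heap_buf == NULL) {          /* allocation can fail -- check it */
        return 1;
    }
    strcpy(heap_buf, "lives on the heap");

    printf("%s / %s\n", stack_buf, heap_buf);

    free(heap_buf);                  /* forgetting this line is a leak */
    return 0;
}
```

Running the same file through `gcc -E` (preprocess only) and `gcc -c` (compile to an object file) before the final link is a quick way to see the pipeline from the first bullet with your own eyes.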
- •RAII, ownership, and resource management
- •Classes, templates, and STL containers
- •Move semantics and smart pointers
- •Zero-cost abstractions and design tradeoffs
- •Use RAII and smart pointers to model ownership clearly.
- •Reach for STL containers and templates without hiding performance tradeoffs.
- •When should ownership live in a smart pointer versus a stack object?
- •Which copies do move semantics let you avoid? (See the sketch below.)
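A minimal sketch of single ownership with std::unique_ptr and of what a move actually transfers; the Buffer type is invented for illustration:

```cpp
#include <iostream>
#include <memory>
#include <utility>
#include <vector>

// RAII resource: acquired in the constructor, released in the destructor,
// never leaked on early return or exception.
struct Buffer {
    explicit Buffer(std::size_t n) : data(n) {}
    std::vector<int> data;
};

int main() {
    // Single owner, expressed in the type. No manual delete anywhere.
    auto a = std::make_unique<Buffer>(1024);

    // Ownership transfer is explicit: after this, 'a' is empty.
    std::unique_ptr<Buffer> b = std::move(a);

    // Moving a vector steals its heap storage instead of copying it.
    std::vector<int> v(1'000'000, 42);
    std::vector<int> w = std::move(v);   // O(1) pointer handoff, not a copy

    std::cout << (a == nullptr) << " " << w.size() << "\n";
}
```

The point to internalize: ownership is stated in the type system, so the compiler enforces what a comment could only suggest.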
- •Arrays, linked lists, stacks, queues
- •Trees, tries, and hash tables
- •Memory ownership patterns in C/C++
- •API design for reusable data structures
- •Implement reusable data structures with a clean public API.
- •Choose ownership and mutation patterns intentionally instead of accidentally.
- •Why is API design as important as internal implementation here?
- •Which data structures benefit most from explicit ownership rules?
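As one sketch of "clean public API" in practice, a small generic stack; returning std::optional from pop is a defensible choice here, not the only one:

```cpp
#include <iostream>
#include <optional>
#include <vector>

// The public API exposes behavior (push/pop/empty), not the storage
// choice, so callers never touch internal state.
template <typename T>
class Stack {
public:
    void push(T value) { items_.push_back(std::move(value)); }

    // Returning optional makes "pop from empty" a visible, testable case
    // instead of undefined behavior.
    std::optional<T> pop() {
        if (items_.empty()) return std::nullopt;
        T top = std::move(items_.back());
        items_.pop_back();
        return top;
    }

    bool empty() const { return items_.empty(); }

private:
    std::vector<T> items_;   // ownership lives inside the structure
};

int main() {
    Stack<int> s;
    s.push(1); s.push(2);
    while (auto v = s.pop()) std::cout << *v << "\n";  // prints 2, then 1
}
```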
- •Big-O analysis and practical tradeoffs
- •Sorting and searching (quick/merge/binary)
- •Recursion, iteration, and dynamic programming
- •Benchmarking and input sizing
- •Compare algorithms using both asymptotic and practical performance.
- •Use benchmarking to validate assumptions instead of guessing.
- •When does a better Big-O result still lose in practice?
- •How do you choose a representative benchmark input?
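A micro-benchmark skeleton for the bullets above; the sizes and probe counts are arbitrary, and real measurements need warmup, repetition, and representative inputs:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

// Times N membership lookups with a linear scan vs binary search on the
// same sorted data. This shows the mechanics only.
int main() {
    std::vector<int64_t> data(1 << 16);
    for (std::size_t i = 0; i < data.size(); ++i) data[i] = (int64_t)i;

    auto time_it = [&](auto find) {
        auto start = std::chrono::steady_clock::now();
        int64_t hits = 0;
        for (int64_t probe = 0; probe < 10'000; ++probe)
            hits += find(probe * 101 % (int64_t)data.size());
        auto end = std::chrono::steady_clock::now();
        std::cout << hits << " hits in "
                  << std::chrono::duration_cast<std::chrono::microseconds>(
                         end - start).count()
                  << " us\n";
    };

    time_it([&](int64_t key) {   // O(n) scan
        return std::find(data.begin(), data.end(), key) != data.end();
    });
    time_it([&](int64_t key) {   // O(log n), requires sorted data
        return std::binary_search(data.begin(), data.end(), key);
    });
}
```

Shrink the array and the O(n) scan becomes competitive thanks to cache behavior, which is exactly the "better Big-O still loses" question above.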
- •Processes, threads, and scheduling basics
- •Syscalls and POSIX APIs
- •Memory management and virtual memory
- •Concurrency hazards: races and deadlocks
- •Describe how processes, threads, and memory management interact.
- •Spot race conditions and deadlock risks earlier in design.
- •What changes when work moves from one process to many threads?
- •Why are races often harder to reproduce than syntax bugs?
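To see the race-condition bullets in code, a minimal data race and its fix; the iteration counts are arbitrary:

```cpp
#include <iostream>
#include <mutex>
#include <thread>

// Two threads increment a shared counter. Without the mutex this is a
// data race (undefined behavior) and the total is usually wrong; with it,
// the result is deterministic. Remove the lock and run the program under
// -fsanitize=thread to watch the race get reported.
int main() {
    long counter = 0;
    std::mutex m;

    auto work = [&] {
        for (int i = 0; i < 100'000; ++i) {
            std::lock_guard<std::mutex> lock(m);  // delete to reintroduce the race
            ++counter;
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << counter << "\n";  // always 200000 while the lock is held
}
```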
- •File descriptors, buffering, and streams
- •Binary vs text formats and serialization
- •Parsing and validation strategies
- •Error handling and recovery paths
- •Read and write structured data with stronger validation and error handling.
- •Choose file and serialization formats based on real access patterns.
- •What makes binary formats faster but harder to inspect?
- •How do you recover safely from partial or malformed input?
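A minimal validation-first reader for the parsing bullets above; the numbers.txt filename and one-integer-per-line format are assumptions made only for this sketch:

```cpp
#include <charconv>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Reads one integer per line, rejecting malformed lines instead of letting
// them corrupt downstream state.
int main() {
    std::ifstream in("numbers.txt");
    if (!in) {                        // opening can fail; say so and stop
        std::cerr << "cannot open numbers.txt\n";
        return 1;
    }

    std::vector<long> values;
    std::string line;
    std::size_t lineno = 0;
    while (std::getline(in, line)) {
        ++lineno;
        long v = 0;
        auto [ptr, ec] =
            std::from_chars(line.data(), line.data() + line.size(), v);
        if (ec != std::errc{} || ptr != line.data() + line.size()) {
            std::cerr << "skipping malformed line " << lineno << "\n";
            continue;                 // recover: skip, don't crash
        }
        values.push_back(v);
    }
    std::cout << "parsed " << values.size() << " values\n";
}
```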
- •TCP vs UDP and socket primitives
- •HTTP request/response lifecycle
- •Concurrency models for servers
- •Timeouts, retries, and reliability
- •Build simple network services with clearer reliability assumptions.
- •Explain retries, idempotency, and timeout behavior using real examples.
- •When is TCP the right choice over UDP?
- •Why can retries create duplicate work if you ignore idempotency?
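A sketch of bounded retries with exponential backoff; send_request here is a hypothetical stand-in for a real network call, wired to fail twice so the loop has something to do:

```cpp
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

// Stand-in for a network call: fails twice, then succeeds.
bool send_request() {
    static int calls = 0;
    return ++calls >= 3;
}

// Bounded attempts plus exponential backoff. The retried operation must be
// idempotent, otherwise every retry risks duplicating work (double-charging,
// double-posting).
bool call_with_retries(std::function<bool()> op, int max_attempts) {
    auto delay = std::chrono::milliseconds(100);
    for (int attempt = 1; attempt <= max_attempts; ++attempt) {
        if (op()) return true;
        if (attempt == max_attempts) break;
        std::cerr << "attempt " << attempt << " failed, backing off\n";
        std::this_thread::sleep_for(delay);
        delay *= 2;                 // exponential backoff; add jitter in practice
    }
    return false;
}

int main() {
    std::cout << (call_with_retries(send_request, 5) ? "ok" : "gave up") << "\n";
}
```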
- •GDB/LLDB workflows and breakpoints
- •Valgrind and sanitizers
- •Profiling CPU and memory hot spots
- •Logging strategies for systems code
- •Use debuggers and sanitizers as first-class tools instead of last resorts.
- •Produce evidence-driven profiling notes that explain where time or memory goes.
- •When should you reach for a debugger versus extra logging?
- •What kind of bugs do sanitizers expose well?
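The sanitizer question above is easiest to answer by feeding one a real bug. This intentionally broken program is a minimal example; the compile command assumes GCC or Clang with AddressSanitizer available:

```cpp
#include <iostream>

// Compile with:  g++ -g -fsanitize=address use_after_free.cpp
// ASan reports the allocation stack, the free stack, and the bad access --
// far more than a crash or a log line would tell you.
int main() {
    int *p = new int(42);
    delete p;
    std::cout << *p << "\n";   // heap-use-after-free: silent UB normally,
                               // a precise report under the sanitizer
    return 0;
}
```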
- •Make and CMake fundamentals
- •Static vs shared libraries
- •Compiler flags and optimization levels
- •Cross-platform build hygiene
- •Ship repeatable builds with consistent flags, tests, and dependency layout.
- •Explain the difference between compiling, archiving, and linking.
- •When do you prefer static libraries over shared libraries?
- •Which build details most often break portability?
- •Git workflows and code review hygiene
- •Testing strategy: unit, integration, regression
- •Design docs and tradeoff communication
- •Security basics: input validation and secrets
- •Write engineering notes and reviews that make your tradeoffs legible to others.
- •Use testing and security basics as part of day-to-day delivery, not only before launch.
- •What should every small design note include?
- •Why is input validation still a systems concern?
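For the testing-strategy bullet, a zero-framework regression test sketch; the slug helper is invented purely for illustration, and a real suite would graduate to GoogleTest or Catch2:

```cpp
#include <cassert>
#include <cctype>
#include <string>

// The habit matters more than the tooling: every bug fix gets a case here.
static std::string slug(const std::string& title) {
    std::string out;
    for (char c : title) {
        if (std::isalnum((unsigned char)c))
            out += (char)std::tolower((unsigned char)c);
        else if (!out.empty() && out.back() != '-')
            out += '-';
    }
    if (!out.empty() && out.back() == '-') out.pop_back();
    return out;
}

int main() {
    assert(slug("Hello World") == "hello-world");
    assert(slug("  Already--slugged ") == "already-slugged");
    assert(slug("") == "");
    return 0;  // all assertions passed
}
```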
- •Shell navigation, pipes, and redirection
- •Process inspection and environment variables
- •Simple automation with scripts and cron-like thinking
- •Using CLI tooling to accelerate debugging and delivery
- •Move faster in the terminal without depending on GUI shortcuts.
- •Automate repetitive engineering tasks with small scripts.
- •What problem do pipes solve well in daily engineering work?
- •When is a script better than repeating a command manually?
- •Threat modeling and trust boundaries
- •Secrets management and least privilege
- •Backups, recovery paths, and rollback planning
- •Rate limiting, audit logs, and incident hygiene
- •Spot obvious security and reliability gaps before production discovers them for you.
- •Design safer rollout and recovery plans for small systems.
- •What is the first trust boundary in the system you are building?
- •Why are backups useless without restore practice?
Daily Battle Plan (Bismillah)
A smart daily loop: build the core, train the patterns, and ship. Mark each day complete to keep momentum.
- •Pointers, memory layout, and ownership
- •RAII and smart pointers
- •Write and read low-level code
- •Two pointers or sliding window
- •Hash maps and frequency counting
- •Time/space tradeoffs
- •Processes, threads, and synchronization
- •Race conditions and deadlocks
- •Context switching intuition
- •Sockets, TCP lifecycle, and timeouts
- •HTTP request/response anatomy
- •Retries and idempotency
- •GDB/LLDB breakpoints
- •Sanitizers and Valgrind
- •Profiling hot paths
- •CMake or Makefile basics
- •Unit tests and CI mindset
- •Release notes and versioning
- •Error logs and lessons learned
- •Tradeoff storytelling
- •System design warmup
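As a sample drill combining the sliding-window and frequency-counting items above (shown in C++ to match the fundamentals route):

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_map>

// Longest substring without repeating characters: grow the window on the
// right, shrink from the left whenever a character count exceeds one.
int longest_unique_run(const std::string& s) {
    std::unordered_map<char, int> count;   // chars inside the current window
    int best = 0;
    std::size_t left = 0;
    for (std::size_t right = 0; right < s.size(); ++right) {
        ++count[s[right]];
        while (count[s[right]] > 1) {      // shrink until the duplicate leaves
            --count[s[left]];
            ++left;
        }
        best = std::max<int>(best, (int)(right - left + 1));
    }
    return best;
}

int main() {
    std::cout << longest_unique_run("abcabcbb") << "\n";  // 3 ("abc")
}
```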
Practice Lab: Live Coding Drills
Jump into live drills with test feedback and a mistake library so you avoid the most common wrong turns.
Engineering Workflow Lifecycle
Learn how real code moves from a laptop to a safe release: local Git, GitHub collaboration, branch policy, testing, CI, CD, and rollback-ready delivery.
Boss Fights (Checkpoints)
These checkpoints demand real execution. Complete the prerequisites, then earn the badge.
- •C/C++ fundamentals applied under pressure
- •Data structures implementation review
- •Explain tradeoffs in plain language
- •OS + networking mental models
- •Concurrency with real constraints
- •Debugging and incident response
- •System design tradeoffs at scale
- •Testing strategy and release hygiene
- •Operational readiness checklist
Role Tracks: Choose Your Route
Choose the path that matches your target role: frontend, backend, AI, SDE, data, or DevOps. Each track is sequential, and each stage builds on the one before it.
Frontend Engineer Route
UI architecture, design systems, visual language, performance, and production frontend delivery.
- •Semantic HTML
- •CSS layout systems
- •Responsive design
- •Accessibility basics
- •Ship clean, semantic interfaces that work well on desktop and mobile.
- •Use layout, spacing, and accessibility as first-class frontend concerns.
- •What breaks first when a layout moves from desktop to mobile?
- •Why does semantic structure matter for accessibility and maintainability?
- •Page uses semantic sections, accessible headings, and keyboard-visible focus states.
- •Layout remains readable on a narrow mobile viewport without content collisions.
- •Reusable shell components can support a second page without duplication.
- •Design tokens
- •shadcn/ui composition
- •IBM Carbon governance
- •Pattern reuse across pages
- •Choose a design system approach that fits team size, speed, and product scope.
- •Turn one-off UI choices into reusable component and token rules.
- •Why might a small startup choose shadcn/ui differently from a large enterprise team using Carbon-style governance?
- •Which product decisions should become design tokens instead of ad hoc values?
- •The design system approach is justified with team and product constraints.
- •At least one screen is rebuilt with reusable primitives and tokens.
- •Another engineer can describe when to reuse, extend, or reject a component pattern.
- •Typography scale
- •Spacing rhythm
- •Icons + stroke systems
- •Studying other websites critically
- •Explain why visual consistency affects clarity, trust, and implementation speed.
- •Create a reusable visual system for typography, spacing, icons, and borders.
- •What happens when icons, borders, and illustrations use inconsistent stroke weights?
- •How do you study another website without blindly copying it?
- •Visual language includes concrete rules for type, spacing, radius, icon style, and stroke weight.
- •Website audits produce reusable design rules rather than screenshots with vague opinions.
- •A second screen can be designed using the same system without inventing new visual rules.
- •Component composition
- •State management
- •Forms + validation
- •Data fetching patterns
- •Model frontend state intentionally instead of scattering state across components.
- •Build flows that stay usable across loading, success, and error states.
- •When should state stay local and when should it move higher?
- •What makes a form feel trustworthy instead of fragile?
- •Flow handles loading, success, empty, and error states without confusing the user.
- •Validation exists both in the UI and at the form submission boundary.
- •State transitions are simple enough that another engineer can trace them quickly.
- •Rendering performance
- •Code splitting
- •Testing strategy
- •Release confidence
- •Diagnose slow renders and bundle problems with evidence.
- •Build component systems that scale without visual drift.
- •What causes a component tree to rerender too often?
- •How do design systems reduce both bugs and design inconsistency?
- •You can point to measured performance improvements rather than subjective claims.
- •Critical UI components share tokens, patterns, and predictable APIs.
- •Key user flows have frontend tests that protect against regressions.
- •Auth UX
- •Observability
- •Edge delivery
- •Analytics + experimentation
- •Treat delivery, measurement, and reliability as part of frontend engineering.
- •Launch interfaces with stronger instrumentation and rollback thinking.
- •What signals tell you a frontend release is actually healthy?
- •How should a frontend team think about rollback safety?
- •Critical auth flows, experiments, and analytics events are instrumented and reviewable.
- •Release plan includes rollback or feature-flag fallback, not just deployment.
- •Monitoring can distinguish frontend failure, API failure, and content failure.
Backend Engineer Route
APIs, data systems, indexing, monitoring, reliability, and secure service design at production depth.
- •Node runtime model
- •HTTP contracts
- •Validation
- •Service ownership boundaries
- •Model runtime and API behavior clearly enough for other teams to consume safely.
- •Build backend features with cleaner validation and service boundaries.
- •Why should a backend engineer understand the runtime before designing service behavior?
- •Why should validation exist at the system boundary instead of only in the UI?
- •API handlers validate requests, return consistent responses, and map clearly to service boundaries.
- •Startup configuration and request validation fail safely instead of silently.
- •A second engineer could consume the service without reverse-engineering the contract.
- •Schema design
- •Database access patterns
- •Relationships
- •Query tradeoffs
- •Model data clearly enough that queries and APIs stay understandable.
- •Choose schema boundaries based on access patterns instead of vague intuition.
- •When is the schema telling you the product model is unclear?
- •Which query patterns should shape the schema earliest?
- •Key entities and access patterns are documented together.
- •Data model supports the product workflow without ambiguous ownership.
- •Another engineer can explain why the schema fits the use case.
- •Indexes
- •Query plans
- •Selectivity
- •Read/write tradeoffs
- •Reason about query performance using indexes and plans instead of guesswork.
- •Explain why read optimization changes write cost and operational tradeoffs.
- •What access pattern makes an index worthwhile?
- •Why can too many indexes hurt write-heavy systems?
- •A slow query is tied to an explicit indexing decision.
- •The tradeoff between performance and maintenance overhead is explained clearly.
- •The learner can describe what a query plan is telling them at a practical level.
- •Authentication
- •Authorization
- •Caching strategies
- •Queues + background jobs
- •Choose the right mechanism for protected access and expensive work.
- •Use queues and caching to improve latency without losing correctness.
- •What should stay synchronous and what should move to background work?
- •Which cache invalidation strategy matches the consistency requirement?
- •Protected endpoints enforce both authentication and authorization rules.
- •At least one expensive flow is moved to async processing with visible latency improvement.
- •Cache behavior and invalidation policy are documented, not implicit.
- •Metrics, logs, traces
- •Monitoring systems
- •SLIs + SLOs
- •Alerting discipline
- •Instrument systems so incidents are diagnosable under pressure.
- •Define monitoring signals that map to real user impact instead of vanity metrics.
- •What makes an alert actionable enough to wake someone up?
- •Which telemetry helps you identify a backend bottleneck fastest?
- •Metrics, logs, and traces make at least one failure mode diagnosable end to end.
- •At least one SLO is written in concrete terms and tied to a user-facing outcome.
- •The monitoring plan distinguishes useful pages from low-signal noise.
- •Threat modeling
- •Rate limiting
- •Runbooks + on-call
- •Capacity planning
- •Own a backend service end to end, including operational and security responsibilities.
- •Communicate service risks and mitigations clearly before launch.
- •Where is the highest-risk trust boundary in your service?
- •What does readiness look like beyond passing tests?
- •Threat model, rate limits, and abuse controls are documented for the service.
- •Runbook includes alert triage, rollback, and recovery steps.
- •Capacity and failure assumptions are written down and reviewable before release.
SDE Route
End-to-end product delivery with strong backend and system design fundamentals.
- •HTTP + REST APIs
- •Auth + data modeling
- •Frontend state + UX
- •Testing basics
- •Caching + queues
- •Observability + logs
- •Error handling + retries
- •Performance profiling
- •Load balancing
- •Data partitioning
- •Consistency tradeoffs
- •Capacity planning
- •Deploy + monitor
- •Feature flags + rollbacks
- •Runbooks + SLAs
SDE II Route
Distributed systems depth, reliability leadership, and architecture reviews.
- •Replication
- •Leader election
- •Consistency models
- •Failure modes
- •Latency budgets
- •Caching layers
- •Profiling
- •Cost modeling
- •SLOs + error budgets
- •Incident response
- •Chaos testing
- •Postmortems
- •Design docs
- •Mentoring
- •Cross-team reviews
- •Roadmapping
Data Engineer Route
Data pipelines, warehousing, and reliability at scale.
- •Star schema
- •Normalization
- •Indexes
- •Data contracts
- •Pipelines
- •Idempotency
- •Scheduling
- •Validation
- •Event ingestion
- •Stream processing
- •Partitioning
- •Warehouses
- •Lineage
- •Access control
- •Quality checks
- •Cost + SLAs
Data Science Route
Statistics, modeling, and deployment with real-world rigor.
- •Pandas + EDA
- •Distributions
- •Hypothesis testing
- •Visualization
- •Feature engineering
- •Train/test split
- •Baseline models
- •Bias/variance
- •Metrics
- •Cross-validation
- •A/B testing
- •Data leakage
- •Model serving
- •Drift detection
- •Monitoring
- •Rollbacks
AI Engineer Route
AI evolution, compute, tokens, model selection, agent workflows, productivity design, and AI product delivery with engineering rigor.
- •Symbolic AI vs ML
- •Deep learning to foundation models
- •AI product systems
- •Why history matters
- •Explain the major eras of AI without collapsing them into one vague category.
- •Describe an AI feature as a system with model, context, tools, and safeguards.
- •What changed when the field moved from hand-written rules to data-driven models?
- •Why is an AI product more than an API call to a model?
- •Learner can distinguish symbolic AI, ML, deep learning, and LLM systems clearly.
- •The feature is explained as a full system rather than a single prompt.
- •Historical framing reduces hype and improves technical precision.
- •GPUs, TPUs, NPUs
- •Training vs inference
- •Memory + throughput
- •Why compute shapes pricing
- •Explain why AI chips and accelerators matter to product behavior and cost.
- •Describe the difference between training workload and serving workload.
- •Why do latency and memory matter so much for inference?
- •What cost tradeoffs appear when traffic grows from demo scale to real usage?
- •Learner can explain compute constraints in product terms, not only hardware jargon.
- •Training and inference are distinguished clearly.
- •The lesson connects cost, latency, and chip constraints to real product decisions.
- •Tokenization
- •Context windows
- •Embeddings
- •Attention + transformer intuition
- •Explain the prompt-to-output path with correct token and embedding terminology.
- •Connect context limits and transformer behavior to real product tradeoffs.
- •What is the role of tokenization before a model can respond?
- •How does attention change what the model can focus on?
- •Learner can explain token budgets and context windows with concrete examples.
- •Embeddings and attention are connected to retrieval and answer quality.
- •Common misconceptions about “the model understands everything in the window” are corrected.
- •GPT, Claude, Gemini
- •Llama, Qwen, Mistral
- •Closed vs open-weight models
- •Latency, safety, and cost fit
- •Choose models based on task requirements instead of hype.
- •Separate reasoning quality, tool use quality, and cost efficiency in evaluation.
- •When does an open-weight model make more sense than a hosted API model?
- •Which model characteristics matter most for coding or agent workflows?
- •Model comparison includes quality, latency, safety, cost, and operational fit.
- •Recommendation clearly states why one model is chosen for one workload and not another.
- •Learner can name major model families and their practical tradeoffs.
- •Prompt structure
- •Schema-bound outputs
- •Function calling
- •Execution guardrails
- •Design prompts and outputs that product code can validate and trust more easily.
- •Explain why tool execution must stay constrained by application logic.
- •Why are schemas safer than free-form answers for product logic?
- •When should a tool call require human confirmation?
- •Output contract is explicit enough for validation and retries.
- •Tool use is guarded by product-side rules rather than only instructions.
- •The system has a fallback path for invalid or ambiguous tool calls.
- •RAG pipelines
- •Planning loops
- •Single-agent vs multi-agent
- •Coding-agent workflows
- •Design grounded AI systems that retrieve, reason, and act more reliably.
- •Use agents as structured workflows instead of novelty features.
- •What failure mode does retrieval solve better than prompting alone?
- •What tasks are safe for autonomous coding agents and which still need tight human review?
- •Assistant retrieves relevant context instead of answering unsupported questions from memory alone.
- •Agent workflow can inspect, plan, edit, and verify without skipping safety boundaries.
- •The learner can explain when not to use an agent.
- •Quality evals
- •Safety checks
- •Latency budgets
- •Cost budgets
- •Turn AI quality into something measurable rather than anecdotal.
- •Balance user value with cost, latency, and safety constraints.
- •What would count as failure for this AI feature?
- •How slow or expensive is too slow or expensive for this product surface?
- •Feature has explicit evaluation criteria beyond anecdotes.
- •Latency and cost budgets are written down and reviewable.
- •Guardrails target real product risks instead of vague concerns.
- •Workflow redesign
- •Human-in-the-loop design
- •AI productivity measurement
- •How AI changes CS and engineering roles
- •Explain where AI amplifies software engineering and where fundamentals still dominate.
- •Design AI-assisted workflows with clearer human ownership and measurable productivity goals.
- •Which CS fundamentals become more important, not less, in an AI-heavy workflow?
- •What metric would prove a workflow is actually more productive after AI is introduced?
- •Memo clearly explains AI’s impact on systems thinking, abstractions, and engineering roles.
- •Workflow design includes human ownership, evaluation, and fallback behavior.
- •Learner can argue why core CS skills still matter in an AI-assisted workflow.
DevOps Route
Automation, infrastructure, observability, and platform reliability.
- •Pipelines
- •Build/test
- •Artifacts
- •Release gates
- •Terraform basics
- •Environments
- •Secrets
- •State management
- •Metrics, logs, traces
- •Alerting
- •SLOs
- •On-call
- •Developer portals
- •Golden paths
- •Self-service
- •Policy-as-code
Build Plan: Clear Increments and Rewards
Each increment is a concrete deliverable. Complete them in order and collect milestone rewards.
- •Finalize core tracks and sequencing
- •Wire completion, XP, and streaks
- •Define clear outcomes for each module
- •Define a learning path that is sequenced instead of random.
- •Ship visible progress state that motivates return visits.
- •What makes one module unlock cleanly into the next?
- •How should a learner know they are actually improving?
- •Daily battle plan drills and timers
- •C/C++ fundamentals resource pack
- •Checkpoints and review prompts
- •Turn passive reading into a daily training loop.
- •Package resources so learners can move faster without hunting for context.
- •What makes a daily drill sustainable instead of overwhelming?
- •Which resource packs reduce friction the most for beginners?
- •Weekly boss fight assessments
- •Scoring rubrics and feedback loops
- •Promotion criteria for next phase
- •Measure readiness with explicit rubrics instead of vague intuition.
- •Create feedback loops that tell learners what to fix next.
- •What makes a checkpoint feel fair and actionable?
- •How should a rubric separate correctness from explanation quality?
- •Unit/integration/e2e coverage goals
- •Accessibility and performance budgets
- •CI workflow for consistent quality
- •Protect curriculum changes with quality checks and accessibility baselines.
- •Use CI to keep learning content stable as the platform grows.
- •Why do quality gates matter for learning products too?
- •What should fail a release even if the page still renders?
- •Copy pass and onboarding clarity
- •Progress share card and milestones
- •Release checklist and analytics hooks
- •Improve onboarding and milestone clarity without changing the UI language.
- •Prepare the curriculum for real learner traffic instead of local demos.
- •What does a first-time learner need on day one to keep going?
- •Which analytics hooks prove the content is actually being used?
- •Capstone packaging and showcase pages
- •Mentor review loops and feedback templates
- •Interview prep artifacts and portfolio storytelling
- •Convert finished lessons into visible proof of growth.
- •Add mentor-friendly review materials without rebuilding the interface.
- •What artifact proves a learner can ship and explain work?
- •How should mentor feedback feed the next study cycle?
Daily Track: Data Structures & Algorithms
Master the patterns that show up in the vast majority of coding interviews. Practice daily to build muscle memory.
Phase 1: The Foundation (Weeks 1–3)
Build a functional monolithic application with modern best practices.
- •React Hooks (useState/useEffect/useContext/useReducer)
- •Component lifecycle & performance optimization
- •State management patterns (Context API, Zustand)
- •Tailwind CSS + responsive design principles
- •Accessibility (ARIA, semantic HTML)
- •Build a responsive, accessible UI with strong state management.
- •Explain core React hooks and performance tradeoffs.
- •Ship a polished dashboard with persistence.
- •When should you use useMemo or useCallback?
- •What ARIA roles are required for a custom list UI?
- •HTTP verbs, status codes, and headers
- •Resource-oriented URL design
- •Request/Response validation (Zod, Joi)
- •Express.js middleware patterns
- •Error handling & logging best practices
- •Design clean REST APIs with validation.
- •Implement middleware for logging and errors.
- •Return consistent responses with correct status codes.
- •What is the difference between 400 and 422?
- •How does Express middleware order affect behavior?
- •SQL (PostgreSQL) vs NoSQL (MongoDB): when to use each
- •ACID properties & transaction management
- •Database normalization & denormalization
- •Indexing strategies & query optimization
- •ORMs: Prisma, Drizzle, or Mongoose
- •Model relations and choose the right database.
- •Apply indexing for query performance.
- •Ship migrations with reliable data access.
- •When should you denormalize a schema?
- •How do you detect an N+1 query problem?
Phase 2: Production-Ready Backend (Weeks 4–6)
Harden your application with security, caching, and automated testing.
- •JWT vs session-based auth (tradeoffs)
- •OAuth2 & OpenID Connect (Google, GitHub login)
- •Password hashing (bcrypt, argon2)
- •RBAC (Role-Based Access Control)
- •OWASP Top 10: XSS, CSRF, SQL Injection, CORS
- •Implement secure auth flows with hashing.
- •Protect routes with roles and permissions.
- •Explain common OWASP risks and mitigations.
- •When should you use sessions instead of JWT?
- •How do you prevent CSRF and XSS?
- •Redis for caching & session storage
- •Cache invalidation strategies (TTL, LRU)
- •CDN basics (Cloudflare, CloudFront)
- •Database query optimization (N+1 problem)
- •Response compression & pagination
- •Design cache strategies with safe invalidation.
- •Measure and explain performance improvements.
- •Avoid common database bottlenecks.
- •When do you choose TTL vs manual invalidation?
- •What causes the N+1 problem and how do you fix it?
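The TTL/LRU bullet is worth seeing as code. This LRU sketch is rendered in C++ to keep the curriculum's examples in one language; in the Node stack above you would reach for Redis or an existing library rather than hand-rolling the policy. Capacity and keys are illustrative:

```cpp
#include <iostream>
#include <list>
#include <optional>
#include <string>
#include <unordered_map>
#include <utility>

// LRU eviction: a doubly linked list keeps recency order (front = hottest),
// a hash map gives O(1) lookup into it.
class LruCache {
public:
    explicit LruCache(std::size_t capacity) : cap_(capacity) {}

    std::optional<std::string> get(const std::string& key) {
        auto it = index_.find(key);
        if (it == index_.end()) return std::nullopt;        // miss
        order_.splice(order_.begin(), order_, it->second);  // mark most-recent
        return it->second->second;
    }

    void put(const std::string& key, std::string value) {
        if (auto it = index_.find(key); it != index_.end()) {
            it->second->second = std::move(value);
            order_.splice(order_.begin(), order_, it->second);
            return;
        }
        if (index_.size() == cap_) {                        // evict least-recent
            index_.erase(order_.back().first);
            order_.pop_back();
        }
        order_.emplace_front(key, std::move(value));
        index_[key] = order_.begin();
    }

private:
    std::size_t cap_;
    std::list<std::pair<std::string, std::string>> order_;
    std::unordered_map<std::string,
        std::list<std::pair<std::string, std::string>>::iterator> index_;
};

int main() {
    LruCache c(2);
    c.put("a", "1"); c.put("b", "2"); c.get("a"); c.put("c", "3");  // evicts "b"
    std::cout << c.get("b").has_value() << "\n";  // prints 0
}
```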
- •Unit tests (Jest, Vitest)
- •Integration & E2E tests (Playwright, Cypress)
- •Test coverage & quality metrics
- •CI/CD with GitHub Actions
- •Automated deployments & rollback strategies
- •Write a reliable test suite across layers.
- •Automate tests in CI for every change.
- •Plan safe deployments and rollbacks.
- •What is the difference between unit and integration tests?
- •What does coverage miss even when it is high?
Phase 3: System Design & Scale (Weeks 7–10)
Design and build systems that scale to millions of users.
- •Monolith vs Microservices (when to use each)
- •Load balancing (L4 vs L7, NGINX, HAProxy)
- •Horizontal vs vertical scaling strategies
- •Stateless vs stateful services
- •Service mesh basics (Istio, Linkerd)
- •Choose an architecture style for the problem.
- •Explain scaling tradeoffs with confidence.
- •Map services using the C4 model.
- •When would you choose a monolith over microservices?
- •What is the difference between L4 and L7 load balancing?
- •CAP theorem & consistency models
- •Eventual consistency & conflict resolution
- •Message queues (Kafka, RabbitMQ, SQS)
- •Rate limiting algorithms (token bucket, leaky bucket; see the sketch after this module)
- •Bloom filters, consistent hashing, HyperLogLog
- •Pick consistency models based on requirements.
- •Design async workflows with queues and retries.
- •Apply advanced data structures for scale.
- •When is eventual consistency acceptable?
- •How do token bucket and leaky bucket differ?
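A token-bucket sketch for the rate-limiting bullet above. The policy is language-agnostic; it is rendered in C++ here to keep the curriculum's examples in one language, and the rates and capacities are illustrative:

```cpp
#include <algorithm>
#include <chrono>
#include <iostream>

// Tokens refill at a steady rate up to a burst cap; each request spends one.
// A full bucket absorbs bursts, an empty one throttles.
class TokenBucket {
public:
    using Clock = std::chrono::steady_clock;

    TokenBucket(double tokens_per_sec, double burst)
        : rate_(tokens_per_sec), capacity_(burst), tokens_(burst),
          last_(Clock::now()) {}

    bool allow() {
        auto now = Clock::now();
        double elapsed = std::chrono::duration<double>(now - last_).count();
        last_ = now;
        tokens_ = std::min(capacity_, tokens_ + elapsed * rate_);  // refill
        if (tokens_ >= 1.0) {
            tokens_ -= 1.0;   // spend a token; request passes
            return true;
        }
        return false;         // bucket empty; request is throttled
    }

private:
    double rate_, capacity_, tokens_;
    Clock::time_point last_;
};

int main() {
    TokenBucket limiter(5.0, 10.0);   // 5 req/s steady, bursts of 10
    int allowed = 0;
    for (int i = 0; i < 25; ++i) allowed += limiter.allow();
    std::cout << allowed << " of 25 burst requests allowed\n";  // ~10
}
```

The leaky bucket differs in shape: it drains requests at a fixed rate instead of letting saved-up tokens absorb a burst, which is exactly the contrast the question above asks for.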
- •Capacity estimation & back-of-envelope calculations
- •Database selection (SQL vs NoSQL vs NewSQL)
- •Failure modes & fault tolerance
- •Monitoring & observability (Prometheus, Grafana)
- •Tradeoff analysis & storytelling
- •Estimate capacity and justify assumptions.
- •Choose databases and explain tradeoffs.
- •Design observability and failure handling.
- •What is a good back-of-envelope throughput estimate?
- •Which metrics define a healthy system?
- •Frontend + API + Database + Auth
- •Real-time features (WebSockets, Server-Sent Events)
- •Caching layer & CDN integration
- •Monitoring, logging, and alerting
- •Production deployment with zero downtime
- •Ship a full-stack capstone with real-time features.
- •Deploy with monitoring and zero-downtime rollout.
- •Demonstrate production-readiness end-to-end.
- •When would you use WebSockets vs SSE?
- •How do you deploy with zero downtime?
Phase 4: Operations, Security & Career Proof (Weeks 11–12)
Harden the capstone, prove operational readiness, and package the work for interviews and portfolio review.
- •Threat modeling and trust boundaries
- •Secrets handling, audit logs, and access control
- •Backups, restores, and rollback design
- •Runbooks, incident communication, and recovery drills
- •Harden the product with clear security and recovery decisions.
- •Document operational risks and a credible incident response plan.
- •Practice reliability work that goes beyond feature delivery.
- •Where is the highest-risk trust boundary in your system?
- •What does a rollback need beyond just old code?
- •Architecture storytelling and design docs
- •Metric-driven demos and project walkthroughs
- •Behavioral narratives anchored to shipped work
- •Portfolio writing, README quality, and reflection
- •Turn the capstone into visible proof of engineering maturity.
- •Explain architecture, tradeoffs, and impact clearly under pressure.
- •Leave the curriculum with an interview-ready artifact set.
- •What would a reviewer understand from your README in two minutes?
- •Which tradeoff decision from the capstone best demonstrates judgment?
Tip: pair Weeks 7–12 with System Design so the architecture concepts stick faster.