Monday, May 11, 2026

Stop Wasting Time in Meetings: The Agile Working Agreement That Actually Works

Let’s be honest: we’ve all been there. The Daily Stand-up that devolves into a 45-minute debugging session. The Sprint Planning that drags on so long you forget what you were planning to build. Or the dreaded "Demo Day Disaster," where the Backend team hands over an API only for the Frontend team to realize the data structures don't match.

It doesn’t have to be this way. High-performing teams don’t just "do Scrum"—they build a Working Agreement.

Think of this as your team’s constitution. It isn’t about corporate bureaucracy; it’s about protecting your "flow state," respecting each other’s time, and ensuring that "Done" actually means "Shippable." Here is a battle-tested Working Agreement designed for modern, cross-functional teams.


1. Meeting Etiquette: Principles for Productive Flow

If you want productive ceremonies, you must start with a foundation for mutual respect.

  • The 2-Minute Rule: If a deep technical debate breaks out during a Stand-up, pause it. If it lasts longer than 120 seconds, move it to the "parking lot" to be discussed by the relevant parties immediately after the meeting.
  • Cameras On (Remote Teams): Cameras are mandatory for Sprint Planning and Retrospectives; these ceremonies require high emotional intelligence and engagement. For the Daily Stand-up? Optional. We trust you’re focused.
  • Punctuality is Non-Negotiable: If Stand-up starts at 10:00 AM, we start at 10:00 AM. If you are late, you owe the team a "coffee debt" or a quick lightning talk on a topic of your choice at the next Retro.
  • No "Second-Screening": Put the distractions away. If you need to answer a Slack message, do it before or after the ceremony. Multitasking is the enemy of a 15-minute meeting.

Ceremony Cheat Sheet:

  • Sprint Planning: 2–4 hours (Once per Sprint)

  • Daily Scrum: 15 mins (Daily)

  • Backlog Refinement: 1–2 hours (1–2x per Sprint)

  • Sprint Review: 1–2 hours (Once per Sprint)

  • Retrospective: 1–1.5 hours (Once per Sprint)

2. Refinement vs. Planning (They Are Not the Same)

Teams often fail because they try to do two different jobs at once.

  • Backlog Refinement (The "Look Ahead"): This is not about committing to work; it is about "DEEP" grooming (Detailed, Estimated, Emergent, and Prioritized). Split this into two 1-hour sessions per sprint. This gives the Product Owner time to find answers to technical blockers before the actual Planning starts.
  • Sprint Planning (The "Commitment"): The team selects items from the already refined backlog.
  • The Golden Rule: If Sprint Planning takes 4+ hours, your Refinement failed. When Refinement is done well, Planning should be a smooth "select and commit" process that takes under 2 hours.

3. The Perfect 2-Week Sprint Calendar

For those who need a visual rhythm, here is your standard cadence:

  • Day 1 (Monday AM): Sprint Planning.

  • Daily: Stand-up (same time, every day).

  • Day 4 or 5: Refinement Session #1 (Mid-sprint check).

  • Day 8 or 9: Refinement Session #2 (Finalizing the "Ready" state).

  • Last Day (Friday PM): Sprint Review (Demo) → Immediately followed by Retrospective.

Pro-Tip: Use "Hard Stops." If a Retrospective is scheduled for 60 minutes, end it at 60 minutes. Timeboxing forces the team to prioritize the most important issues rather than venting about minor ones.

4. The Holy Grail: Unified Planning for Cross-Functional Teams

This is where most cross-functional teams break down. To stop the "Silo Effect" between Backend (BE) and Frontend (FE), adopt these three habits:

The "Interface-First" Approach

Before a single line of code is written, BE and FE engineers must agree on the API Contract (the JSON structure). Once this "handshake" is defined, the FE team can build using mock data while the BE builds the logic. No one waits for anyone.
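As a toy illustration of that handshake (the DTO, interface, and field names below are hypothetical, not taken from any real contract), both teams can code against one shared interface while the FE starts from a mock:

```java
import java.util.List;

// The agreed contract, captured as a shared DTO and interface.
// Field names and types here are illustrative assumptions.
record OrderDto(String orderId, String status, List<String> itemSkus) {}

interface OrderApi {
    OrderDto getOrder(String orderId);
}

// The FE team builds against this mock immediately, while the BE team
// implements the same interface with real logic. No one waits.
class MockOrderApi implements OrderApi {
    @Override
    public OrderDto getOrder(String orderId) {
        return new OrderDto(orderId, "SHIPPED", List.of("SKU-1", "SKU-2"));
    }
}
```

The moment the real `OrderApi` implementation lands, the FE swaps out the mock with no code changes on its side.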

The "One-Sprint-Ahead" Design Rule

UX/UI Design is not part of the current sprint’s development; it is part of Refinement. If a story’s design isn't finalized by Planning, that story is not "Ready" and does not enter the sprint.

The "Three Amigos" Flow

During planning, use this three-phase approach for every story:

  • Phase A (The Vision): The Product Owner explains the "Why" and "What."

  • Phase B (Technical Alignment): BE and FE engineers draft the contract together and identify architectural hazards.

  • Phase C (Sub-Tasking): Break the story into specific tasks (e.g., Task 1: Define API Contract; Task 2: Build logic; Task 3: Integration testing).

5. Shared Planning vs. Split Planning

| Shared Planning (The Goal) | Split Planning (The Risk) |
| --- | --- |
| Early Integration: Interfaces are discussed upfront. | Late Integration: Bugs are discovered on Day 9. |
| High Context: Everyone understands the full stack. | Black Boxes: Teams don't know what the "other side" is doing. |
| Shared Accountability: The team wins or loses together. | Finger-Pointing: "The API is broken" vs. "The UI is wrong." |
| Parallel Work: FE uses mocks to start immediately. | Linear Work: FE waits for the BE to finish. |

One final tip: If the Backend team needs a deep 10-minute architecture huddle that the Frontend doesn't need to hear, use a breakout group. Huddle for 10 minutes, then "regroup" and explain the final plan to the other half.

Summary

A good Working Agreement isn't about control—it’s a shield against chaos. It protects the developers' focus, the Product Owner’s roadmap, and the customer’s need for reliable software.

Your action item for tomorrow: Share this with your team. At your next Retrospective, ask: "Which one of these rules are we breaking the most?"

Fix that one thing first. Your velocity will thank you.

Thursday, April 2, 2026

Stop Starting. Start Finishing.

How to Make Your Agile Process More Predictable (and Save Your Sanity)

Let’s be honest for a second.

How often do you hear—or ask—these questions?

  • “When will this feature actually be done?”
  • “How many features can we ship next release?”
  • “Why does everything feel chaotic even though we’re ‘doing Agile’?”

If that sounds familiar, you’re not alone. But here’s the hard truth: most teams struggle with predictability not because they work slowly, but because they start too much.

It’s time to stop starting and start finishing.

Your Agile Process Is a Queue (Really)

Every Agile process—Kanban, Scrum, or hybrid—can be modeled as a simple queuing system:

Arrivals → System (work being done or waiting) → Departures

That’s it. But here’s the catch:
To make that system predictable, you need two crystal-clear moments:

  1. A clear arrival point (when work is truly “started”)
  2. A clear departure point (when work is truly “finished”)

Without those, you’re flying blind.

The Three Metrics That Matter

You don’t need 27 dashboards. You need three core metrics:

| Metric | What it means |
| --- | --- |
| WIP (Work in Progress) | Count of items started but not finished |
| Cycle Time (CT) | Time from start to finish for one item |
| Throughput (TH) | How many items you finish per day/sprint |

And one powerful formula:
WIP = CT × TH

Want shorter cycle times? Lower your WIP.
Want higher throughput? Lower your WIP.
WIP is the lever that controls almost everything.
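A quick sanity check of the formula, with made-up numbers: an average cycle time of 5 days and a throughput of 2 finished items per day implies about 10 items in progress at any time.

```java
// Little's Law, stated as code. The 5-day / 2-per-day figures are
// illustrative only; plug in your own measured averages.
public class LittlesLaw {
    static double averageWip(double cycleTimeDays, double throughputPerDay) {
        return cycleTimeDays * throughputPerDay; // WIP = CT × TH
    }

    public static void main(String[] args) {
        System.out.println(averageWip(5.0, 2.0)); // prints 10.0
    }
}
```

Read it backwards and the lever becomes obvious: if you cap WIP at 10 and throughput stays at 2 per day, cycle time cannot exceed 5 days on average.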

Cycle Time vs. Age – A Critical Distinction

  • Cycle Time applies to finished items. It’s the total elapsed time from start to finish.
  • Age applies to unfinished items. It’s how long a started item has been sitting in progress.

Why does this matter?
Because aging is bad. Every day an item ages without finishing, you delay customer feedback. And delayed feedback is wasted learning.

Cycle time isn’t just a metric. It’s a measure of how fast you get validated feedback from real users.

Two Ways to Prevent Aging

You can’t stop aging once an item is in progress… unless you do one of two things:

  1. Finish it (obvious, but hard when you’re overloaded)
  2. Don’t start it (less obvious, but more powerful)

That second one is the secret sauce.
Don’t start work unless you are truly ready to finish it quickly.

This is why controlling WIP isn’t a micromanagement trick.
The real reason to control WIP is to prevent unnecessary aging.

Throughput, Not Velocity

A quick but important note:
Many teams track velocity (story points per sprint). But story points are subjective. Throughput is not.

Throughput = number of work items completed per unit of time
(e.g., “We finished 7 stories last sprint”)

Throughput is honest. It doesn’t care if a story was a 3 or an 8. It just counts finished work.

And throughput, combined with cycle time and WIP, gives you something priceless: predictable outcomes.
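Here is what "predictable" looks like in practice, as a back-of-the-envelope sketch (the numbers are invented; use your own measured throughput): remaining work divided by observed throughput gives a rough sprints-to-done answer.

```java
// Illustrative forecast from throughput alone: if 15 stories remain and
// the team finishes 7 per sprint, "done" is roughly 3 sprints away.
public class Forecast {
    static int sprintsToFinish(int remainingItems, int throughputPerSprint) {
        // round up: a partially used sprint is still a sprint
        return (remainingItems + throughputPerSprint - 1) / throughputPerSprint;
    }
}
```

This is deliberately naive (real forecasting would use a throughput distribution, not a single average), but even this beats guessing from story points.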

What Should You Track?

You don’t need a complex data science project. Start here:

  • WIP – Keep it small
  • Cycle Time – Measure it per epic/story/task
  • Throughput – Count finished items (epic/story/task) per sprint
  • Age – Watch for aging items like a hawk
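"Watching aging like a hawk" can be as simple as the sketch below. The `WorkItem` shape and the 7-day threshold are assumptions for illustration, not a standard; most teams tune the threshold to their own cycle-time data.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;

// A started-but-unfinished item and when it entered "in progress".
record WorkItem(String key, LocalDate startedOn) {}

class AgingReport {
    // Flag every in-progress item older than maxAgeDays.
    static List<WorkItem> agingItems(List<WorkItem> inProgress, LocalDate today, long maxAgeDays) {
        return inProgress.stream()
                .filter(item -> ChronoUnit.DAYS.between(item.startedOn(), today) > maxAgeDays)
                .toList();
    }
}
```

Run this against your board export once a day and discuss anything it flags at Stand-up.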

When you control WIP, cycle time becomes reliable.
When cycle time is reliable, throughput becomes predictable.
When throughput is predictable, your team stops guessing and starts delivering.

The Bottom Line

You can’t answer “when will it be done?” with wishful thinking.
You answer it with data. And that data comes from finishing work, not starting more.

So take a hard look at your board today.
How many items are “in progress”?
How many have been sitting there for over a week?
How many did you start but never finished?

Then say it out loud:

Stop starting. Start finishing.

Your team (and your product owner) will thank you.


Want to dig deeper? Start tracking your WIP, cycle time, and throughput for two sprints. You’ll be amazed at what you learn.

Tuesday, December 30, 2025

Beyond Unit Tests: Building a Reliable Test Suite for Modern Systems

We’ve all heard it: “You should write tests.” And it’s true. Writing unit tests or a few acceptance checks is a good first step. But in today’s complex software landscape, it’s simply not enough. What separates effective teams from the rest isn’t just writing tests—it’s having a deliberate, scalable testing strategy.

Let’s break down what that really means.

The Anatomy of a Good Test

At its core, every automated test follows a simple script: precondition, action, postcondition. You set up a meaningful state, perform an operation, and verify the outcome. Yet, it’s surprisingly easy to write tests that barely scratch the surface of what your system can do.
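That precondition / action / postcondition script can be written out in miniature. `ShoppingCart` and its methods below are hypothetical stand-ins, purely to show the shape:

```java
// A trivial class under test, invented for this example.
class ShoppingCart {
    private int items = 0;
    void add(int count) { items += count; }
    int itemCount() { return items; }
}

class ShoppingCartTest {
    static void addingItemsIncreasesCount() {
        ShoppingCart cart = new ShoppingCart(); // precondition: a meaningful starting state
        cart.add(2);                            // action: the operation under test
        assert cart.itemCount() == 2;           // postcondition: verify the outcome
    }
}
```

Every attribute in the list that follows is really a judgment about how well a test executes this three-step script.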

So, what elevates a test from “written” to worthwhile? A high-quality test suite embodies these essential attributes:

  • Behavior-Revealing: Tests should act as live documentation. They clarify how the system actually behaves, not just that it compiles.
  • Discoverable: When you modify code, finding the related tests should be intuitive and fast.
  • Valuable: Does the test verify something important, or is it just exercising the framework? Avoid tests that merely prove your database can save data.
  • Fast: Slow tests drain productivity. They encourage context-switching and become a bottleneck. (Periodically, teams should prioritize speeding up their test suite.)
  • Clean: A test should leave no trace. "Test pollution"—lingering side effects—can cause tests to fail unpredictably when run in different orders.
  • Reliable: Flaky tests are toxic. They erode trust in your CI/CD pipeline and slow down merges. A reliable test gives the same result every time for the same configuration.
  • Parallelizable: Clean, side-effect-free tests can run in any order and in parallel, drastically cutting feedback time.
  • Revealing: A test failure should clearly point toward the broken component or assumption.
  • Accurate: Green should mean "it works," and red should mean "there's a real bug." Minimize false positives and false negatives.

The Distributed Systems Challenge

This becomes far more complex in a world of microservices and distributed systems. Multiple services must collaborate seamlessly, and testing their interactions is a major hurdle.

Consider API evolution: if you adopt a Specification-First design with multiple consumers, you’re quickly faced with versioning complexity to avoid breaking changes.

An alternative is Consumer-Driven Contracts (CDC), which flips the script. Here, the consumers of an API define their expectations in a "contract," and the provider agrees to fulfill them. This leads us to a more efficient testing paradigm.

Enter Contract Testing: The Integration Game-Changer

Contract testing is the practical application of CDC. It allows you to test service integrations one at a time, without deploying the entire system or relying on fragile, full-stack environments.

Key benefits:

  • Focus: Verify the contract between a specific consumer and provider.
  • Speed: Runs on a developer's machine, providing feedback as fast as unit tests.
  • Independence: Reduces the need for complex, slow, and flaky end-to-end integrated tests.
  • Safety: Enables teams to release microservices independently and with greater confidence.
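Before reaching for tooling, the core idea fits in a few lines. This is a deliberately hand-rolled toy (real projects should use a proper tool, as discussed next): the consumer states which fields and types it relies on, and the provider's test checks its response against that expectation.

```java
import java.util.Map;

// Toy consumer-driven contract: field names and expected Java types
// are invented for illustration.
class Contract {
    // What the consumer actually depends on.
    static final Map<String, Class<?>> CONSUMER_EXPECTATIONS = Map.of(
            "orderId", String.class,
            "totalCents", Integer.class);

    // Run inside the provider's test suite: does a response satisfy
    // every field the consumer needs, with the right type?
    static boolean providerSatisfies(Map<String, Object> providerResponse) {
        return CONSUMER_EXPECTATIONS.entrySet().stream()
                .allMatch(e -> e.getValue().isInstance(providerResponse.get(e.getKey())));
    }
}
```

Note the asymmetry: extra fields in the provider's response are fine; only missing or mistyped fields that the consumer relies on break the contract.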

Introducing Pact: Streamlining Contract Testing

Pact is a powerful, open-source tool that makes consumer-driven contract testing straightforward. It helps teams:

  1. Define clear contracts between services.
  2. Test providers and consumers in isolation.
  3. Eliminate the heavy dependency on integrated test environments.
  4. Build a more balanced and efficient Testing Pyramid, shifting weight away from the brittle top layer of E2E tests.

By investing in a tool like Pact, you move beyond just writing tests—you build a safety net that scales with your architecture.

The Bottom Line

A collection of unit tests is a start (TDD practices would be much better; more on that later). A strategy built on fast, reliable, clean tests—augmented by practices like contract testing for integrations—is what allows teams to move quickly and confidently. It’s the foundation for sustainable velocity in a microservices world.



What’s your experience with testing distributed systems? Have you tried contract testing or tools like Pact? Share your thoughts in the comments below.

Tuesday, December 2, 2025

The Secret Architecture of Your Org Chart: Why Feature Teams Build Better Software

In a previous article, "Why Feature Teams Beat Siloed Development: Lessons from a Cloud Migration," I discussed moving away from functional silos toward cross-functional units. Today, let's dive deeper into a more fundamental, architectural reason for organizing as feature teams—one that directly shapes the software you build.

The simple truth is this: success in software development is directly related to how individuals coalesce into a true team. Assigning names to a group and giving them tasks doesn’t magically create one.

What Makes a Team, Anyway?

A team is:

  • A group that has matured enough to be effective.
  • A collection of people who share goals and interact consistently to perform tasks.

If your "team" is constantly in conflict, operates with distrust, or doesn't feel united, it's not a team. If members rarely interact or pursue different—or hidden—agendas, they aren't even a group.

Effective software teams balance design, technical, and business knowledge. Location is secondary; regular, meaningful interaction toward common goals is what matters. Remote and hybrid teams can thrive, but they often require intentional facilitation to reach the maturity needed for intense collaboration. And size is a factor—most high-performing teams range from 4 to 8 members. Beyond that, you need to divide and conquer.

The Hidden Force That Shapes Your Code

But why structure teams around features or products in the first place? The answer lies in a principle that quietly governs how organizations build software.

In 1964, architect Christopher Alexander—whose work inspired software design patterns—argued that stable systems should be split into components with high internal cohesion and low external coupling. The human tendency, however, is to organize around familiar labels rather than optimal structure.

Then, in 1968, Mel Conway made a pivotal observation, later popularized by Fred Brooks as Conway’s Law:

“Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.”

In practice, this means developers often structure software to mirror their org chart, not architectural best practices. If you have a "security team," an "engine team," and a "UI team," you’ll likely end up with a security module, an engine module, and a UI module—whether or not that creates a secure, stable, or well-designed system.

Turning Conway’s Law to Your Advantage: The Inverse Conway Maneuver

Here's where it gets powerful. Instead of letting your organization accidentally dictate your architecture, you can flip the script. This is called the Inverse Conway Maneuver.

You deliberately design your team structure to produce the software architecture you want.

Organize teams around the components and boundaries you desire in your system. Give them names that reflect what those components actually do. Want a clean, well-defined interface between two services? Put them under separate, autonomous teams. Building a platform with a plugin architecture? Create a core platform team and dedicated teams for each major plugin.

By aligning your team topology with your target architecture, you guide the system toward better cohesion, cleaner separation of concerns, and more sustainable design.

Feature Teams as an Architectural Strategy

So when we advocate for feature teams, we’re not just talking about efficiency or morale—we’re talking about software design. A well-structured feature team, aligned to a clear product area or customer journey, naturally builds software with strong internal cohesion and intentional boundaries to other domains.

This is the intrinsic, architectural reason to move beyond silos. It’s not just about working better together—it’s about building better systems, intentionally.

Your org chart isn't just a list of names and titles. It's the blueprint of your software's future architecture. Design it wisely.

Have you seen Conway’s Law play out in your organization? How have you structured teams to shape better software? Share your story in the comments.

Monday, November 17, 2025

Beyond the Code Review: Why Pair Programming is the Ultimate Training Ground for Engineers

A question from leadership that I decided was best answered with this article: “What’s the most effective way to train and improve our software engineers' skills?”

It’s a critical question. We all know that expertise isn't created overnight—you can’t fast-track a graduate to a senior role. The real challenge is how to systematically guide an engineer to consistently produce high-quality, architecturally sound code that meets both functional and non-functional requirements, all while minimizing bugs.

While processes and tools play a role, I believe the most powerful tool is often the most human one: Pair Programming.

The Limits of Going Solo and the Late-Stage Code Review

Traditional onboarding and skill development often rely on independent work, followed by a code review. While reviews are valuable, they are inherently a reactive process. By the time a pull request is opened, the design is solidified, the code is written, and the mental energy has been spent. A reviewer can spot bugs and suggest improvements, but they can’t easily influence the architectural decisions as they are being made.

This is where pair programming changes the game.

The Pair Programming Advantage: Two Brains, One Goal

Pair programming isn't just two people sharing a keyboard. It’s a dynamic, collaborative process where one person (the “Driver”) writes code while the other (the “Navigator”) reviews each line, thinks strategically, and keeps the big picture in focus.

The benefits for skill development are profound:

  • Real-Time Design and Critique: The most significant advantage is the ability to shape the design as it happens. The navigator can challenge assumptions, ask "what if?" questions, and propose alternative approaches instantly, leading to more robust solutions from the start.
  • Early Bug Catcher: With a second set of eyes on every line of code, typos, logical errors, and potential edge cases are caught immediately—long before they become bugs in a test environment.
  • Unmatched Mentoring and Knowledge Sharing: This is the ultimate training engine. Pairing a senior engineer with a junior one isn’t a lecture; it’s an immersive, hands-on apprenticeship. Similarly, pairing a domain expert with a newcomer is the fastest way to spread crucial business knowledge. So much tacit knowledge—about codebase quirks, debugging techniques, and team conventions—is transferred naturally in conversation.

What About AI? Copilot Changes the Game, But Not the Rule.

In the age of AI assistants like GitHub Copilot, a valid question arises: Doesn't an AI partner make human pairing obsolete?

The answer is a resounding no. In fact, my experience is that AI pairing reinforces the need for human collaboration.

Think of Copilot as brilliant autocomplete on steroids. It's fantastic at suggesting code, boilerplate, and common patterns based on the vast data it was trained on. But it lacks context, intent, and critical reasoning.

  • Copilot can't challenge your architectural decisions. It can't ask, "Why are we building it this way?" or "Have you considered the long-term maintenance cost of this approach?"
  • Copilot doesn't understand our business domain. It can't say, "Wait, the billing rules changed last quarter, this logic is outdated."
  • Copilot doesn't mentor a junior engineer. It can't explain why one solution is better than another or share war stories about what failed in the past.

When you pair with a human while using Copilot, you get the best of both worlds. The AI handles the boilerplate and accelerates keystrokes, freeing up the human pair to do what they do best: think, reason, design, and teach. The conversation shifts from "what code do we write?" to "is this the right code, and does it solve our actual problem?"

A Note from the Trenches: It’s Not All Easy

Having worked in an Extreme Programming (XP) shop where no code was written without a pair, I’ve seen both the immense benefits and the real challenges.

Pair programming requires sustained effort, a collaborative mindset from all engineers, and strong organizational support. It can be mentally taxing, and doing it "all the time" isn't always practical or necessary. The key is to use it strategically for complex features, onboarding, and tackling gnarly legacy code.

The Bottom Line

If your goal is to accelerate growth, improve code quality, and build a deeply connected, knowledgeable engineering team, pair programming is one of the most powerful investments you can make. It transforms skill development from a passive, solitary activity into an active, collaborative journey. And while AI tools are incredible force multipliers, they enhance—rather than replace—the irreplaceable value of human collaboration.

Ready to dive deeper? I highly recommend the comprehensive Pair Programming Guide from Tuple for practical tips and best practices.

Wednesday, October 29, 2025

The Mini-Box Pattern: A Pragmatic Path to Resilient Event-Driven Architecture

In modern microservices, the ability to reliably react to changes is paramount. Event-driven architectures, often powered by message brokers like Google Pub/Sub, promise loose coupling and scalability. But this power comes with a significant challenge: ensuring that processing an event and notifying others about it is a reliable operation.

In this post, we'll explore a common pitfall in event-driven systems and introduce a simple, evolutionary pattern we call the "Mini-Box Pattern," a pragmatic stepping stone to the well-known Outbox Pattern.

The Dream: Seamless Event-Driven Communication

In our ideal system, events flow seamlessly, driving business processes forward. We primarily see two scenarios:

  • Scenario 1: Event Handler Chains. A microservice receives a change event from Pub/Sub, updates its own database, and produces a new notification event for other interested services to consume.
  • Scenario 2: API-Driven Events. A REST API updates the database and, as a side effect, must produce a notification event (e.g., for an audit service or to update a read model).

In both cases, the service must reliably do two things: update its database and send a new event.

The Nightmare: The Non-Atomic Reality

The core reliability problem stems from a simple fact: database transactions and network calls are not atomic.

A service must only acknowledge (ACK) the initial event or API request once both the database write and the new event publication are successful. If either fails, it should negatively acknowledge (NACK) to force a retry.

But consider this failure scenario:

  1. The service successfully commits its database transaction.
  2. It then tries to publish the resulting event to Pub/Sub, but a network partition occurs.
  3. The publication fails. The service must NACK the original event.
  4. The original event is retried, leading to a duplicate database update.

This is a classic "at-least-once" delivery problem, but it's compounded by the fact that the two critical operations can't be grouped. Even with robust retry logic and exponential backoff, the retries themselves can cause timeouts, leading to unnecessary NACKs and system instability.

We needed a way to break the tight, unreliable coupling between the database transaction and the event publication.

The Goal: The Transactional Outbox Pattern

The definitive solution to this problem is the Transactional Outbox Pattern. In this pattern, the outgoing event is stored as part of the same database transaction that updates the business data. A separate process then relays these stored events to the message broker.

This ensures atomicity—the event is guaranteed to be persisted if the transaction commits. However, implementing the full outbox pattern, with a reliable relay service, can be a significant undertaking.

The Bridge: Introducing the Mini-Box Pattern

Faced with time constraints but an urgent need for improved resilience, we designed an intermediate solution: the Mini-Box Pattern.

This pattern gives us the core durability benefit of the outbox without immediately building the asynchronous relay component. It's a pragmatic compromise that buys us critical time and creates a foundation we can evolve.

How the Mini-Box Pattern Works

The key is to treat the outgoing event as data first, and a message second.

  1. Transactional Write: Inside the same database transaction that handles the business logic:
    • The business data is updated.
    • The outgoing event payload is serialized (e.g., to JSON) and inserted into a dedicated outbox_messages table.
  2. Best-Effort Publish: After the transaction successfully commits, the service attempts to publish the event to Pub/Sub as before.
  3. The Safety Net: This is where the magic happens.
    • On Success: The event is sent, and the source is ACK'd. We have a record of the event in our database for potential debugging or replay.
    • On Failure: If the Pub/Sub call fails, the event is already safely stored. We can NACK the original request without fear of losing the event. Our system can now alert us that there are "stranded" messages in the outbox_messages table, which can be replayed manually via a console or script.

This approach decouples the fate of our event from the transient failures of the network. The synchronous part of our operation (the database transaction) now captures the full intent of the operation, including the need to notify others.

Our Implementation Plan

Adopting the Mini-Box pattern involved a clear, staged plan:

  1. Design the Foundation: Create the outbox_messages table via a Liquibase script.
  2. Refactor the Core: Update our shared publishing library to perform the dual write: first to the database table, then to Pub/Sub.
  3. Integrate Across Services: Roll out the updated library to all our REST APIs and event handlers. This work was parallelized across the team.
  4. Test Rigorously: Conduct end-to-end performance and integration tests to ensure the new flow was stable.
  5. Implement Alerting: Set up monitoring and alerting to notify us when messages fail to publish and land in the table.
  6. Evolve: This table and process are perfectly positioned to be evolved into a full Outbox Pattern by building a relay service that polls this table and publishes the events.
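Step 6, the eventual relay, can be sketched as a simple poll-and-mark loop. In this conceptual version the storage is an in-memory list and the publisher is a predicate stand-in; a real relay would poll the outbox_messages table and publish to Pub/Sub.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Conceptual outbox relay: drain unsent rows, mark successes as sent,
// leave failures for the next polling cycle. Types are illustrative.
class OutboxRelay {
    record StoredMessage(long id, String payload, boolean sent) {}

    static List<StoredMessage> relay(List<StoredMessage> rows, Predicate<String> publish) {
        List<StoredMessage> updated = new ArrayList<>();
        for (StoredMessage row : rows) {
            if (!row.sent() && publish.test(row.payload())) {
                // Published successfully: mark as sent so it is never resent.
                updated.add(new StoredMessage(row.id(), row.payload(), true));
            } else {
                // Already sent, or publish failed: retry on the next poll.
                updated.add(row);
            }
        }
        return updated;
    }
}
```

Because unsent rows simply survive until the next cycle, transient broker outages heal themselves without losing events; the manual replay console becomes a fallback rather than the primary mechanism.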

A Glimpse at the Code

The core logic is surprisingly straightforward. Here's a simplified conceptual example:

// Inside the service method, within a transaction
public OrderCreatedEvent processOrder(Order order) throws JsonProcessingException {

    // 1. Update business data
    orderRepository.save(order);

    // 2. Serialize and persist the event within the SAME transaction
    OrderCreatedEvent event = new OrderCreatedEvent(order);
    String serializedEvent = objectMapper.writeValueAsString(event);
    outboxRepository.save(new OutboxMessage(serializedEvent, "order-topic"));

    // Transaction commits here. If it fails, everything rolls back.
    return event;
}

// After the transaction commits, attempt to publish the returned event
public void publishOrderEvent(OrderCreatedEvent event) {
    try {
        pubSubPublisher.publishAsync(event);
    } catch (PublishException e) {
        // The event is safe in the database! We can alert and replay later.
        logger.warn("Publish failed, but event is persisted to outbox. Manual replay required.", e);
    }
}

Conclusion

The Mini-Box Pattern is a testament to pragmatic engineering. It acknowledges that while perfect, definitive solutions are excellent goals, sometimes the best path is an evolutionary one.

By making a small architectural change—treating events as data first—we dramatically increased the resilience of our system without a massive upfront investment. We've bought ourselves time, reduced operational anxiety, and built a solid foundation for the future. If you're struggling with event reliability but aren't ready for a full outbox implementation, the Mini-Box might be the perfect bridge for you.

Have you faced similar challenges? What interim patterns have you used? Share your thoughts in the comments below!