
Testing Modern Frontend: State, Async, Network, and UI

How to think about frontend testing strategy without becoming hostage to too many mocks, too many clicks, or too little confidence.

Andrews Ribeiro

Founder & Engineer

Track

Senior Frontend Interview Trail

Step 4 / 15

The problem

Frontend testing becomes messy fast because many things happen at the same time:

  • local state
  • remote state
  • rendering
  • user events
  • requests
  • loading
  • retries
  • errors

If you do not separate those pieces mentally, the strategy degrades.

Then two bad extremes show up:

  • tiny tests that only confirm internal detail and do not protect the flow
  • giant tests that try to cover everything and become slow, fragile, and expensive

Mental model

Think about it this way:

Modern frontend is not “just UI.” It is coordination between state, time, network, and interaction.

So the main question should not be:

“Which testing tool are we going to use?”

It is better to ask:

“Which risk am I trying to reduce in this part of the frontend?”

Examples:

  • data transformation logic
  • a component rendering the right states
  • integration with requests and cache
  • the complete user flow

Each of those calls for a different testing level.

Breaking the problem down

Pure state usually calls for cheaper tests

If there is a function that:

  • filters results
  • calculates totals
  • decides status
  • transforms an API response

that usually does not need the DOM or a browser.

A simple direct test is often enough.

It is cheap, fast, and helps a lot once the logic starts growing.
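To make the "cheap direct test" idea concrete, here is a minimal sketch: a pure function that calculates an order total from an API response, checked with direct assertions. The `ApiItem` shape and function name are illustrative assumptions, not from any real codebase.

```typescript
// A pure transformation: no DOM, no browser, no framework needed.
// `ApiItem` is an assumed shape for illustration.
interface ApiItem {
  price: number;
  quantity: number;
}

function orderTotal(items: ApiItem[]): number {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

// Direct assertions: fast, cheap, and they keep paying off as the logic grows.
console.assert(orderTotal([]) === 0);
console.assert(orderTotal([{ price: 10, quantity: 2 }, { price: 5, quantity: 1 }]) === 25);
```

Because the function is pure, the test needs no setup or teardown, which is exactly why this layer stays cheap.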

Component integration is where a lot of real confidence appears

A big part of the value in frontend is verifying:

  • loading appears when it should
  • error appears when it should
  • success updates the screen
  • user action triggers the right flow

That usually calls for a rendered component with its main dependencies, but without turning everything into e2e.

This is where many teams really mature, because they stop testing only loose functions and start testing visible behavior.
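One hedged way to see why those four checks are testable without e2e: model the visible states as a pure reducer and assert the transitions directly. The state and event names below are illustrative assumptions; in a real component test you would render the component and query what the user sees, but the transitions being verified are the same.

```typescript
// A sketch of a component's visible states as a pure state machine.
// All names here are assumptions made for the example.
type ViewState =
  | { kind: "idle" }
  | { kind: "loading" }
  | { kind: "success"; items: string[] }
  | { kind: "error"; message: string };

type ViewEvent =
  | { type: "FETCH_STARTED" }
  | { type: "FETCH_SUCCEEDED"; items: string[] }
  | { type: "FETCH_FAILED"; message: string };

function reduce(state: ViewState, event: ViewEvent): ViewState {
  switch (event.type) {
    case "FETCH_STARTED":
      return { kind: "loading" };
    case "FETCH_SUCCEEDED":
      return { kind: "success", items: event.items };
    case "FETCH_FAILED":
      return { kind: "error", message: event.message };
  }
}

let state: ViewState = { kind: "idle" };
state = reduce(state, { type: "FETCH_STARTED" });
console.assert(state.kind === "loading"); // loading appears when it should
state = reduce(state, { type: "FETCH_SUCCEEDED", items: ["order-1"] });
console.assert(state.kind === "success"); // success updates the screen
```

The point is that "loading appears when it should" is a transition, and transitions can be asserted at the component level without a browser.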

Network needs to be handled with judgment

Some teams mock everything so artificially that the tests lose contact with real behavior.

Others lean on real networking so heavily that they buy slowness and instability.

The balance point is often:

  • simulate the network at the level of the expected contract
  • test success, error, and loading
  • reserve real environments for a few essential flows

In other words, it is not “never mock” or “always mock.”

It is controlling what matters without lying about the system.
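A minimal sketch of "simulate the network at the level of the expected contract": the code under test depends on a fetching function, and the test swaps in fakes that honour the same contract for success and for error. Every name here (`fetchOrders`, `loadOrderCount`, the `Order` shape) is an assumption for illustration; in practice a tool that intercepts requests at the HTTP layer plays the same role.

```typescript
// The contract the UI code depends on: an assumed shape for this example.
interface Order {
  id: string;
  total: number;
}
type FetchOrders = (query: string) => Promise<Order[]>;

// Code under test: it only knows the contract, not the transport.
async function loadOrderCount(
  fetchOrders: FetchOrders,
  query: string
): Promise<number | "error"> {
  try {
    const orders = await fetchOrders(query);
    return orders.length;
  } catch {
    return "error";
  }
}

// A fake that honours the success contract...
const fakeSuccess: FetchOrders = async () => [{ id: "1", total: 10 }];
// ...and one that honours the error contract.
const fakeFailure: FetchOrders = async () => {
  throw new Error("500");
};

loadOrderCount(fakeSuccess, "shoes").then((n) => console.assert(n === 1));
loadOrderCount(fakeFailure, "shoes").then((r) => console.assert(r === "error"));
```

The fakes lie about the transport but not about the contract, which is the balance the section describes: success, error, and loading all remain reachable.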

Async changes how the test needs to observe

In modern frontend, many failures come from having a synchronous expectation about something asynchronous.

For example:

  • the test asserts before the screen updates
  • it clicks and validates before the request ends
  • it waits for the wrong text instead of waiting for an observable state

The problem here is not only tooling.

It is the mental model.

The test needs to follow the real rhythm of the system.
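Here is a small sketch of what "waiting for an observable state" can mean, with a hypothetical `Store` and a `waitForState` helper that resolves only once a predicate holds. Testing libraries ship their own versions of this idea; the helper below is an assumption built for the example.

```typescript
// A tiny observable store: illustrative, not a real library API.
type Listener = () => void;

class Store<T> {
  private listeners: Listener[] = [];
  constructor(private value: T) {}
  get(): T {
    return this.value;
  }
  set(next: T): void {
    this.value = next;
    this.listeners.forEach((l) => l());
  }
  subscribe(l: Listener): void {
    this.listeners.push(l);
  }
}

// Resolves when the predicate becomes true: the assertion follows the
// system's rhythm instead of racing ahead of it.
function waitForState<T>(store: Store<T>, pred: (v: T) => boolean): Promise<T> {
  return new Promise((resolve) => {
    if (pred(store.get())) return resolve(store.get());
    store.subscribe(() => {
      if (pred(store.get())) resolve(store.get());
    });
  });
}

// Simulate a request finishing later, as the real UI would experience it.
const status = new Store<"loading" | "success">("loading");
setTimeout(() => status.set("success"), 10);

// Asserting status.get() here, synchronously, would be the classic mistake.
waitForState(status, (s) => s === "success").then((s) => {
  console.assert(s === "success");
});
```

An immediate `status.get()` check would read `"loading"` and fail; waiting on the observable state is what makes the test deterministic.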

E2E has a place, but does not replace the rest

E2E is great for answering:

  • does the main flow work?
  • is the real integration between critical parts still alive?
  • does the most important business path still complete end to end?

But it is expensive to use e2e for everything.

When a team tries to push all confidence to the top of the pyramid, the cost comes back as slowness, flakiness, and maintenance burden.

Simple example

Imagine an order-search screen.

It has:

  • a search input
  • loading
  • a result list
  • an empty state
  • an error message

A reasonable strategy could be:

  1. test the function that normalizes filters and pagination
  2. test the component showing loading, success, empty, and error
  3. test that typing and submitting trigger the expected request
  4. keep one e2e covering the main search flow
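Step 1 of that strategy can be sketched directly: a pure normalizer for the search filters and pagination, tested with plain assertions. The `SearchFilters` shape, the defaults, and the cap are all illustrative assumptions.

```typescript
// Assumed filter shape for the order-search screen example.
interface SearchFilters {
  query: string;
  page: number;
  pageSize: number;
}

// Normalizes raw input: trims and lowercases the query, clamps
// pagination to sane defaults. The specific rules are assumptions.
function normalizeFilters(raw: Partial<SearchFilters>): SearchFilters {
  return {
    query: (raw.query ?? "").trim().toLowerCase(),
    page: raw.page && raw.page > 0 ? raw.page : 1,
    pageSize:
      raw.pageSize && raw.pageSize > 0 ? Math.min(raw.pageSize, 100) : 20,
  };
}

// Step 1 of the strategy: direct tests, no DOM, no network.
console.assert(normalizeFilters({ query: "  Shoes " }).query === "shoes");
console.assert(normalizeFilters({ page: -2 }).page === 1);
console.assert(normalizeFilters({ pageSize: 500 }).pageSize === 100);
```

Steps 2 and 3 would then run against the rendered component, and only step 4 pays the price of a real browser.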

A bad strategy would be:

  • test everything only through e2e

or:

  • test only isolated functions and declare victory

Neither extreme protects the product well.

Common mistakes

  • Writing tests needlessly coupled to state, props, or internal structure.
  • Ignoring loading and error states because the happy path already passed.
  • Mocking so artificially that the real flow disappears.
  • Putting too much trust in e2e to compensate for strategy gaps.
  • Using fragile selectors and blaming the tool when the test breaks.

How a senior thinks about it

People with more experience look at frontend in layers of risk.

They do not try to win on volume.

They try to position confidence better.

Common questions from someone stronger:

  • which part is pure logic?
  • which part needs the DOM?
  • which part depends on a network contract?
  • which flow deserves e2e because the impact is high?

That creates a suite that is cheaper to maintain and far more useful.

What the interviewer wants to see

In interviews, the evaluator wants to see whether you think in strategy, not only tooling.

Weak answer:

“I would test with unit, integration, and e2e.”

Better answer:

“I would separate state logic from visible interaction. Transformation rules I would test in isolation. Important UI states I would validate with component tests. Critical business flows I would keep covered by a few e2e tests. And I would treat loading, error, and success as central parts of the strategy, not as details.”

That shows:

  • sense of cost
  • sense of risk
  • understanding of real frontend

Testing frontend well is choosing where each type of confidence is worth the most.

When everything turns into e2e, the suite gets expensive. When everything turns into unit tests, the product stays exposed.
