Overview

HighPerformanceReact.dev

A collection of tips and tricks to help you write high performance React code, from the obvious to the obscure.

My name is Tom and I’ve spent the better part of a decade building React apps. I also wrote The Computer Science Book, a complete introduction to computer science in one book for self-taught developers.

Three rules for writing high-performance code:

  1. First make the code work.
  2. Identify which code is harming performance.
  3. Optimise and measure the outcome.

Do not start optimising before you have working code and evidence of a problem. Premature optimisation is a trap. Profile first, then act.

Where I reference React source code, I’m using commit 9cdf8a9 unless otherwise noted.

Prerequisites

Before you start optimising, you need to understand what React is actually doing and how to measure it. This section covers both.

# Understand the render cycle

React updates components in three steps:

  1. Render: React calls your component function and gets a React Element back.
  2. Reconciliation: React compares these Elements with the previous render’s Elements.
  3. Commit: If anything changed, React updates the DOM.

The important thing here – and where a lot of confusion comes from – is that rendering does not mean updating the DOM. Rendering is React calling your function. It’s cheap-ish, because React was designed on the assumption that calling your function is fast but updating the DOM is slow.

The catch is that when a component renders, all of its children render too. So one unnecessary render at the top of a large component tree can cascade into thousands of function calls.

Most React performance work comes down to two things: speeding up slow renders and eliminating unnecessary ones.
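The cascade described above can be sketched in plain JavaScript. This is a toy mental model, not React's actual implementation: "components" are just functions returning element-like objects, and rendering a parent re-invokes every child function.

```javascript
// A toy model of the render cascade -- illustrative only, not React's code.
// Rendering a parent calls every child's function, whether or not its
// props changed.

let renderCount = 0;

const Leaf = (props) => {
  renderCount++;
  return { type: "span", props };
};

const Branch = (props) => {
  renderCount++;
  // Rendering Branch re-invokes both Leaf calls -- the cascade in miniature.
  return { type: "div", props, children: [Leaf({ n: 1 }), Leaf({ n: 2 })] };
};

Branch({}); // one render at the top becomes three function calls
console.log(renderCount); // 3
```

Scale that tree up to a few hundred components and one unnecessary render at the root becomes thousands of calls.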

# Know what triggers a render

A component will re-render when:

  1. Its parent re-renders. This is the default. If a parent renders, every child renders too – regardless of whether the child’s props changed. This surprises a lot of people. React does not compare props by default; it just re-renders the whole subtree.
  2. Its state changes. Calling setState or a useState setter schedules a re-render.
  3. A context it consumes changes. Any component that calls useContext on a context will re-render when that context’s value changes.
  4. A hook schedules an update. Hooks like useReducer, useSyncExternalStore, or custom hooks can also schedule re-renders.

The first point is the one that catches people. You might assume that React is clever enough to skip a child whose props haven’t changed – but it isn’t, unless you explicitly tell it to using React.memo (see memoising function components).

This is a deliberate design decision. Comparing props has a cost, and React would rather do a cheap re-render than an expensive comparison that might not save any work. For most components, this is the right tradeoff. For the ones where it isn’t, that’s what this guide is for.

# Profile your app

Before you change anything, measure. You have two main tools:

Your browser’s JS profiler gives detailed information about raw JavaScript execution. It sees everything but knows nothing about React specifically.

The React DevTools Profiler knows which components rendered, how long they took, and (sometimes) why. It’s the one you’ll reach for most often.

Recording a profile

  1. Install React Developer Tools and open the Profiler tab in your browser’s dev tools.
  2. Click the gear icon to open the settings, then under its Profiler section enable “Record why each component rendered while profiling”.
  3. Hit the red record button, interact with your app, then stop recording.

Reading a profile

The top right shows a list of commits – each time React updated the DOM. Taller bars mean longer commits.

The main pane shows which components rendered during the selected commit. The root is at the top, leaves at the bottom. Spend a moment mapping this to your actual component tree.

Bar width indicates how long that component and its children took to render. Hotter colours (yellow, red) mean slower renders. Grey means the component didn’t render at all in that commit.

Hover over a component and the tooltip will explain why it rendered. This is super useful – but don’t trust it blindly. The profiler uses “The parent component rendered.” as its default explanation, meaning it falls back to that message when it doesn’t have better information. You’ll find cases where it claims a parent re-rendered when it plainly didn’t.

When the profiler tells you something that doesn’t add up, dig deeper. Check what hooks the component uses, what context it consumes, and what props are actually changing. The debugging snippets at the end of this guide can help.

The Basics

Techniques that every React developer should know. If you’re already familiar with these, skip ahead – but make sure you’ve actually implemented them before reaching for anything fancier.

# Use stable, unique keys

Keys are special props that tell React which items in a list are which. React uses them during reconciliation to figure out what changed, what was added, and what was removed. If you see key warnings in your console, fix them – they’re telling you that React is doing far more work than it needs to.

Here’s why. Imagine a NavBar:

<NavBar>
  <NavItem path="/about" label="About" />
  <NavItem path="/buy-my-book" label="Buy My Book!" />
</NavBar>

You add a conditional item at the top for logged-in users:

<NavBar>
  {user.loggedIn && <NavItem path="/my-account" label="My Account" />}
  <NavItem path="/about" label="About" />
  <NavItem path="/buy-my-book" label="Buy My Book!" />
</NavBar>

Without keys, React compares elements by position. When “My Account” appears, it compares the new first element (My Account) with the old first element (About), finds they don’t match, and tears them both down. It then compares new-second (About) with old-second (Buy My Book) – again, no match. React ends up recreating every single NavItem, even though two of them haven’t changed at all.

With stable, unique keys, React can see that About and Buy My Book are the same items as before and only needs to create the one new item.

A couple of things to watch for:

  • Keys must be on the directly-rendered element. If you wrap your NavItems in another component, the key goes on the wrapper, not the NavItem inside it. It’s easy to get this wrong during refactoring.
  • Don’t use array indices as keys if the list can reorder, insert, or delete items. Index-based keys will cause React to match up the wrong items and do unnecessary (sometimes destructive) work – you’ll get subtle bugs where component state ends up attached to the wrong item.

# Use key to reset component state

Most people think of keys as a list thing. They’re not – they work on any component, and changing a key tells React to destroy the old instance and create a fresh one.

This is useful when you want to reset a component’s internal state without writing reset logic. Say you have an edit form that should reset when the user selects a different item:

const ItemEditor = ({ item }) => {
  const [draft, setDraft] = useState(item.text);
  // ...
};

// Without key: switching items leaves stale state in the form
<ItemEditor item={selectedItem} />

// With key: React creates a fresh ItemEditor for each item
<ItemEditor key={selectedItem.id} item={selectedItem} />

Without the key, React sees the same component type in the same position and reuses the existing instance – including its state. Your form will show the previous item’s text. With the key, React treats each item ID as a completely different component and mounts a fresh one, so useState runs its initialiser again with the correct value.

This is cleaner than the alternative – adding a useEffect to watch for prop changes and manually reset state – because it lets React’s existing lifecycle handle it. No synchronisation bugs, no stale state.

The tradeoff is that changing the key unmounts and remounts the entire subtree, which is more expensive than a simple re-render. For lightweight forms this doesn’t matter. For components with expensive mount logic (data fetching, heavy DOM manipulation), consider whether the cost is worth it.

# When index keys are actually better

The standard advice is “never use array indices as keys.” But there’s a case where index keys are genuinely preferable: stateless lists where the entire dataset changes at once.

Think about paginated search results:

// With unique keys: React unmounts all 20 items, mounts 20 new ones
<ResultList>
  {results.map(r => <ResultItem key={r.id} result={r} />)}
</ResultList>

// With index keys: React updates the 20 existing DOM nodes in place
<ResultList>
  {results.map((r, i) => <ResultItem key={i} result={r} />)}
</ResultList>

When the user clicks “page 2”, the entire results array changes. With unique keys, React sees 20 items disappearing and 20 new items appearing – it unmounts every ResultItem and mounts fresh ones. With index keys, React sees 20 items in the same positions and just updates their props, reusing the existing component instances and DOM nodes.

Reusing is significantly faster than destroying and recreating, especially if the items involve any non-trivial DOM (images, iframes, complex layouts).

This only works when the items are stateless – they don’t hold internal state that would get confused by receiving different data. And it only helps when the list changes completely rather than having individual items inserted, removed, or reordered. For those operations, unique keys are still essential.

Autocomplete dropdowns, search results, paginated tables, log viewers – anywhere the list content is replaced wholesale and items don’t hold state – consider index keys.

# Memoise with React.memo, useMemo, and useCallback

React executes your entire component function every time it re-renders. Memoisation gives you tools to skip unnecessary work.

React.memo – skip re-renders when props haven’t changed

Wrapping a component in React.memo tells React to compare the previous props with the new ones before rendering. If nothing changed, React skips the render entirely.

const ExpensiveList = React.memo(({ items, onSelect }) => {
  return items.map(item => (
    <ListItem key={item.id} item={item} onSelect={onSelect} />
  ));
});

Without React.memo, this component re-renders every time its parent does – even if items and onSelect are identical. With it, React does a shallow comparison of each prop and skips the render when they match.

The key word there is shallow. React.memo uses === for each prop. That means it compares object references, not values. Two objects with identical content will still fail the check if they’re different objects in memory. This is the single most common reason people add React.memo and wonder why it isn’t helping – see referential equality.

useMemo – cache expensive computations

useMemo caches the result of a computation and only re-runs it when its dependencies change:

const Chart = ({ data, threshold }) => {
  const filtered = useMemo(
    () => data.filter(d => d.value > threshold),
    [data, threshold]
  );
  return <LineChart data={filtered} />;
};

Without useMemo, the filter runs on every render, even if data and threshold haven’t changed. For small arrays this is fine. For ten thousand data points with a complex filter, it adds up.

useCallback – stabilise function references

Functions defined inside a component are recreated every render. That’s normally harmless, but it breaks React.memo – each new function is a different reference, so the prop comparison fails.

useCallback returns the same function reference as long as its dependencies haven’t changed:

const Parent = ({ items }) => {
  const handleSelect = useCallback((id) => {
    console.log('selected', id);
  }, []);

  return <ExpensiveList items={items} onSelect={handleSelect} />;
};

useCallback is only useful when the function is passed to a memoised child. If the child doesn’t use React.memo, stabilising the reference achieves nothing – the child re-renders regardless.

When not to bother

Memoisation has a cost: memory for the cached values, and the comparison itself on every render. For cheap components that render quickly, the overhead of memoisation can exceed the cost of just re-rendering. Profile before you memoise.

# shouldComponentUpdate and PureComponent

For class components. If you’re only using function components, React.memo is the equivalent – see above.

By default, React re-renders class components whenever their parent renders. The shouldComponentUpdate lifecycle method lets you intercept this and return false to skip the render:

class UserCard extends React.Component {
  shouldComponentUpdate(nextProps, nextState) {
    return nextProps.userId !== this.props.userId;
  }

  render() {
    return <div>{this.props.userId}</div>;
  }
}

This only works cleanly when the component’s output is a pure function of its props and state. If it has side effects or reads from external sources, bailing out of a render can cause stale UI.

PureComponent

PureComponent does a shallow equality check on all props and state automatically. It doesn’t actually implement shouldComponentUpdate – it achieves the same effect via a separate code path in the reconciler – but the outcome is the same.

class UserCard extends React.PureComponent {
  render() {
    return <div>{this.props.userId}</div>;
  }
}

The shallow comparison means PureComponent checks each prop with ===. This has two implications:

  • Passing new object references with the same values will cause unnecessary re-renders. See referential equality.
  • Using mutable data structures will cause missed re-renders, since mutating an object doesn’t change its reference.

PureComponent works best with immutable data. If you’re using Redux or a similar immutable store, it’s a straightforward win.

Intermediate

These are common causes of unnecessary re-renders in real codebases. They frequently show up once you start profiling and are usually straightforward to fix.

# Understand referential equality

This is probably the single most common source of unnecessary re-renders in React applications. You wrap a component in React.memo, pat yourself on the back, and then discover it’s still re-rendering every time. The culprit is almost always a prop that looks the same but isn’t – because it’s a new object or function reference each render.

Inline functions

const MemoButton = React.memo(Button);

// Bad: new function reference every render
<MemoButton onClick={event => handleClick(event)} />

// Good: stable reference
<MemoButton onClick={handleClick} />

The lambda in the first example creates a new function every time the parent renders. Each function is a different reference, so React.memo’s shallow comparison says “prop changed” and re-renders.

In this case the wrapper is pointless anyway – just pass the function directly. When you need to pass extra arguments, use useCallback:

const MemoButton = React.memo(Button);

// Bad: new function for each button, every render
const Toolbar = ({ onClick }) => (
  <>
    <MemoButton onClick={e => onClick(e, "save")} />
    <MemoButton onClick={e => onClick(e, "delete")} />
  </>
);

// Good: stable callbacks via intermediate component
const ActionButton = React.memo(({ onClick, action }) => {
  const handleClick = useCallback(
    e => onClick(e, action),
    [onClick, action]
  );
  return <MemoButton onClick={handleClick} />;
});

const Toolbar = ({ onClick }) => (
  <>
    <ActionButton onClick={onClick} action="save" />
    <ActionButton onClick={onClick} action="delete" />
  </>
);

Object and array literals

// Bad: new object reference every render
<MemoChart style={{ color: "red" }} />

// Good: defined once, same reference forever
const CHART_STYLE = { color: "red" };
<MemoChart style={CHART_STYLE} />

Every { color: "red" } in JSX creates a brand new object. Even though the content is identical, === compares references, not values – so React.memo sees it as a change.

The same applies to array literals. [1, 2, 3] in JSX is a new array every render.

Function.prototype.bind

Don’t bother trying onClick={this.handleClick.bind(this, "save")} to work around this. bind returns a new function every time, so it fails the equality check just like a lambda would.

# Move constants out of components

Any value that doesn’t depend on props or state should be defined outside the component body:

// Bad: recreated every render
const Sidebar = ({ items }) => {
  const styles = { width: 240, background: "#f5f5f5" };
  const defaultItems = ["Home", "Settings"];
  return <Nav style={styles} items={items || defaultItems} />;
};

// Good: created once at module level
const SIDEBAR_STYLES = { width: 240, background: "#f5f5f5" };
const DEFAULT_ITEMS = ["Home", "Settings"];

const Sidebar = ({ items }) => {
  return <Nav style={SIDEBAR_STYLES} items={items || DEFAULT_ITEMS} />;
};

There are two problems with defining constants inside a component. First, the object is recreated on every render – a small amount of wasted computation that adds up across many components. Second, each new object has a different reference, which defeats React.memo on any child receiving it as a prop.

When a value depends partly on props or state, use useMemo to keep the reference stable between renders where the inputs haven’t changed:

const Sidebar = ({ width }) => {
  const styles = useMemo(
    () => ({ width, background: "#f5f5f5" }),
    [width]
  );
  return <Nav style={styles} />;
};

# Don’t create components inside render

Defining a component inside another component’s render path is a subtle but devastating performance mistake:

const Dashboard = ({ onClick }) => {
  // Don't do this!
  const Panel = ({ children }) => (
    <div className="panel" onClick={onClick}>{children}</div>
  );

  return (
    <Panel>
      <ExpensiveChart />
    </Panel>
  );
};

Every time Dashboard renders, it creates a new Panel function. React compares component types by reference, so each render produces what React considers an entirely new component. React will unmount the old Panel and mount a fresh one – destroying all DOM state, losing focus, resetting animations, and re-running every child’s mount logic.

This isn’t just an unnecessary re-render. It’s a full teardown and rebuild of the entire subtree.

The fix is to define the component at module level:

const Panel = ({ onClick, children }) => (
  <div className="panel" onClick={onClick}>{children}</div>
);

const Dashboard = ({ onClick }) => (
  <Panel onClick={onClick}>
    <ExpensiveChart />
  </Panel>
);

Higher-order components have the same problem

The less obvious version of this mistake happens with HOCs:

// Bad: creates a new component type every render
const Dashboard = ({ userId }) => {
  const EnhancedChart = withData(Chart);
  return <EnhancedChart userId={userId} />;
};

// Good: HOC applied once at module level
const EnhancedChart = withData(Chart);

const Dashboard = ({ userId }) => (
  <EnhancedChart userId={userId} />
);

Any call that returns a component type – HOCs, React.memo(), React.forwardRef() – must happen outside the render path.

# Don’t change element type conditionally

When React encounters a different element type in the same position, it tears down the old tree and builds a new one from scratch. This applies to both HTML tags and component types.

// Bad: toggling between tags destroys and recreates the subtree
const Container = ({ isInline, children }) => {
  if (isInline) {
    return <span className="container">{children}</span>;
  }
  return <div className="container">{children}</div>;
};

Every time isInline toggles, React sees a different element type (span vs div) and unmounts the entire subtree – including all child components and their state. It then mounts a completely new subtree. If children contains anything expensive, this is painful.

The fix depends on what you’re trying to achieve. If it’s just a styling difference, use a single element and change the style:

const Container = ({ isInline, children }) => (
  <div
    className="container"
    style={{ display: isInline ? "inline" : "block" }}
  >
    {children}
  </div>
);

The same principle applies to component types. Toggling between <UserCard> and <AdminCard> in the same position will unmount and remount, losing all state. If they share a common structure, consider a single component with conditional rendering inside it rather than two separate component types.

# Prefer composition to memoisation

Before reaching for React.memo, consider whether restructuring your components can avoid the problem entirely. Two patterns are particularly effective.

Move state down

If only part of a component depends on frequently-changing state, extract that part:

// Before: entire component re-renders on every keystroke
const Page = () => {
  const [query, setQuery] = useState("");
  return (
    <div>
      <input value={query} onChange={e => setQuery(e.target.value)} />
      <ExpensiveTree />
    </div>
  );
};

// After: only SearchInput re-renders
const SearchInput = () => {
  const [query, setQuery] = useState("");
  return <input value={query} onChange={e => setQuery(e.target.value)} />;
};

const Page = () => (
  <div>
    <SearchInput />
    <ExpensiveTree />
  </div>
);

By pushing the state into SearchInput, the parent Page no longer re-renders when the query changes, so ExpensiveTree is left alone. No memoisation needed.

Lift content up

When the state has to live in the parent, pass the expensive content as children:

// Before: ExpensiveTree re-renders every time isOpen changes
const Layout = () => {
  const [isOpen, setIsOpen] = useState(false);
  return (
    <div>
      <Sidebar isOpen={isOpen} onToggle={() => setIsOpen(o => !o)} />
      <ExpensiveTree />
    </div>
  );
};

// After: ExpensiveTree is created by the grandparent, unaffected by Layout's state
const Layout = ({ children }) => {
  const [isOpen, setIsOpen] = useState(false);
  return (
    <div>
      <Sidebar isOpen={isOpen} onToggle={() => setIsOpen(o => !o)} />
      {children}
    </div>
  );
};

const App = () => (
  <Layout>
    <ExpensiveTree />
  </Layout>
);

When Layout re-renders due to a state change, children is the same JSX element reference that was passed in by App. React sees the same element and skips re-rendering ExpensiveTree.

This works because the children prop is created by App, not by Layout. Since App didn’t re-render, the children reference is stable.

Both patterns are preferable to memoisation because they have zero runtime cost – no comparisons, no cached values, no dependency arrays to get wrong. They’re structural solutions rather than band-aids.

# useCallback without React.memo is wasted work

This is one of the most common performance anti-patterns in React codebases. Developers learn that useCallback “prevents unnecessary re-renders” and sprinkle it everywhere:

const Parent = () => {
  const [count, setCount] = useState(0);

  // Wrapped in useCallback "for performance"
  const handleClick = useCallback(() => {
    console.log("clicked");
  }, []);

  // But Child isn't wrapped in React.memo...
  return <Child onClick={handleClick} />;
};

const Child = ({ onClick }) => {
  return <button onClick={onClick}>Click</button>;
};

This useCallback achieves nothing. When Parent re-renders, Child re-renders too – because that’s what React does by default. The stable callback reference is irrelevant because nobody is checking whether the reference changed.

useCallback only helps when the child is wrapped in React.memo:

const Child = React.memo(({ onClick }) => {
  return <button onClick={onClick}>Click</button>;
});

Now the stable reference matters, because React.memo compares the previous and new onClick with ===.

Without React.memo on the receiving end, useCallback is pure overhead – it’s doing a dependency comparison on every render for no benefit. The overhead is small per call, but across hundreds of components it adds up. Worse, it makes the code harder to read for no reason.

Audit your useCallback usage. For each one, check: is the function passed to a memoised component? If not, remove it.

Advanced

Expert-level techniques. These solve real problems but come with complexity – make sure the performance benefit justifies the added cognitive load.

# Understand useState behaviour

useState has some behaviours that matter for performance. Knowing them lets you avoid unnecessary work and unnecessary anti-patterns.

Don’t guard state updates

You don’t need to check whether the value has changed before calling a setter:

// Unnecessary: useState already does this check
if (value !== newValue) {
  setValue(newValue);
}

// Just do this
setValue(newValue);

React’s useState implementation already uses Object.is to compare the new value with the current one. If they’re the same, it bails out. Your manual check adds nothing – though it does no harm either.

There’s a catch, though: even when the value hasn’t changed, React may still call your component function once before bailing out. This happens because React can only do an “eager” comparison when it’s certain the update queue is empty. If there’s any ambiguity, it speculatively renders the component, checks whether the output differs, and then bails out before reconciling children or committing to the DOM.

In practice this means setState(sameValue) is not a complete no-op. Your component function body still executes – including any expensive inline logic that isn’t wrapped in useMemo. You’ll see the component briefly appear in the profiler even though nothing visibly changed. The children won’t re-render and the DOM won’t update, but the function call still happens.

Use functional updates to stabilise callbacks

This is a common pattern that defeats its own purpose:

const Modal = () => {
  const [isOpen, setIsOpen] = useState(false);
  const toggle = useCallback(
    () => setIsOpen(!isOpen),
    [isOpen]
  );
  return <ModalContent onClick={toggle} show={isOpen} />;
};

The intention is to cache toggle so that ModalContent (assuming it’s memoised) doesn’t re-render unnecessarily. But toggle depends on isOpen, so it’s recreated every time isOpen changes – which is every time it’s clicked. The memoisation is doing nothing useful.

The fix is to use a functional update. The setter provides the current value as an argument:

const Modal = () => {
  const [isOpen, setIsOpen] = useState(false);
  const toggle = useCallback(
    () => setIsOpen(prev => !prev),
    []
  );
  return <ModalContent onClick={toggle} show={isOpen} />;
};

Now toggle has no dependencies, so it’s created once and never changes. The functional update reads the current state at call time, so it’s always correct.

Lazy initialisation

If the initial state requires an expensive computation, pass a function to useState:

// Bad: parseData runs on every render (result is ignored after the first)
const [data, setData] = useState(parseData(rawData));

// Good: parseData runs only on mount
const [data, setData] = useState(() => parseData(rawData));

Without the function wrapper, parseData(rawData) runs every render – React just throws away the result after the first. With the function wrapper, React only calls it during the initial mount.

# Store non-render state in refs

Sometimes you need to track a value that shouldn’t trigger a re-render when it changes. In class components, you’d write to an instance property. In function components, useRef fills that role.

A common case: tracking hover state for a click handler.

const ListItem = ({ item, onClick }) => {
  const isHovered = useRef(false);

  const handleClick = useCallback(() => {
    onClick(item.id, { wasHovered: isHovered.current });
  }, [item.id, onClick]);

  return (
    <div
      onClick={handleClick}
      onMouseEnter={() => { isHovered.current = true; }}
      onMouseLeave={() => { isHovered.current = false; }}
    >
      {item.name}
    </div>
  );
};

If you used useState for isHovered, every mouse enter and leave would trigger a re-render – potentially dozens of renders as the user moves their cursor through a list. Using useRef, the value updates silently and the component only re-renders when actual props change.

Another common use: keeping a callback in sync with the latest props without adding it as a dependency.

const Autosave = ({ onSave, data }) => {
  const onSaveRef = useRef(onSave);

  // Update the ref after each render rather than during it – mutating
  // refs in the render phase is unsafe under concurrent rendering.
  useEffect(() => {
    onSaveRef.current = onSave;
  });

  useEffect(() => {
    const id = setInterval(() => {
      onSaveRef.current(data);
    }, 30000);
    return () => clearInterval(id);
  }, [data]);
};

Without the ref, you’d either need onSave in the effect’s dependency array – causing the interval to reset every time the parent re-renders – or risk calling a stale callback. The ref gives you a stable reference to always-current behaviour.

The general rule: if a value influences what the component renders, it belongs in state. If it only influences behaviour (event handlers, timers, imperative logic), a ref is a better fit.

# Understand update batching

When you call multiple state setters in quick succession, React can batch them into a single re-render. Whether it does depends on your React version.

Before React 18

React only batched updates inside its own event handlers:

const Search = () => {
  const [query, setQuery] = useState("");
  const [results, setResults] = useState([]);

  // Batched: one re-render (React event handler)
  const handleSubmit = () => {
    setQuery(input);
    setResults(filtered);
  };

  // NOT batched: two re-renders
  const handleFetch = async () => {
    const data = await fetch("/api/search");
    const json = await data.json();
    setResults(json.items);   // render 1
    setQuery(json.query);     // render 2
  };

  return <button onClick={handleSubmit}>Search</button>;
};

Inside handleSubmit, both setters run synchronously within a React event handler, so React batches them. Inside handleFetch, the setters run after an await – outside React’s synchronous execution context – so each one triggers a separate render.

The same split applied to setTimeout, setInterval, and native DOM event listeners. Anything outside React’s synchronous event handling was unbatched.

React 18: automatic batching

React 18 introduced automatic batching. All state updates are batched regardless of where they originate – promises, timeouts, native event handlers, anything. The handleFetch example above now causes one re-render instead of two.

If you’re on React 18+ and profiling shows double-renders from sequential state updates, something unusual is going on. Check whether you’re inadvertently opting out of batching with flushSync (which forces immediate synchronous rendering) or whether you’re hitting a subtle edge case with external state management.

When batching matters

The impact of unbatched updates depends on what the component does. Two re-renders of a lightweight component is rarely noticeable. Two re-renders of a component that triggers expensive child renders, data fetches, or DOM measurements can be a real problem. If you’re stuck on React <18 and this is causing issues, you can combine the state updates into a single call using useReducer or by merging the values into one state object.

# Optimise context usage

Context is convenient, but every change to a context value re-renders every component that consumes that context. There’s no built-in way to subscribe to just part of a context value, and React traverses the entire provider subtree to find consumers on each update – there’s no cached subscriber list.

This means a context holding a large state object can cause widespread unnecessary re-renders when any part of that state changes.

Separate contexts by change frequency

The most effective mitigation is to split one big context into several smaller ones:

// Bad: changing the theme re-renders every component that reads user info
const AppContext = React.createContext({ user: null, theme: "light" });

// Good: theme changes don't affect user consumers
const UserContext = React.createContext(null);
const ThemeContext = React.createContext("light");

Group values that change together into one context, and values that change independently into separate contexts.

Stabilise object references

If your context value is an object, make sure you’re not creating a new object every render:

// Bad: new object on every render of the provider
const AuthProvider = ({ children }) => {
  const [user, setUser] = useState(null);
  return (
    <AuthContext.Provider value={{ user, setUser }}>
      {children}
    </AuthContext.Provider>
  );
};

// Good: stable object reference
const AuthProvider = ({ children }) => {
  const [user, setUser] = useState(null);
  const value = useMemo(() => ({ user, setUser }), [user]);
  return (
    <AuthContext.Provider value={value}>
      {children}
    </AuthContext.Provider>
  );
};

Without useMemo, the provider creates a new { user, setUser } object every render, which React treats as a context change even when user hasn’t actually changed.

Keep consumers small

When a consumer must re-render, make it fast. A common pattern is to split the context-reading logic from the rendering:

const UserBadge = React.memo(({ name, avatar }) => (
  <div className="badge">
    <img src={avatar} alt={name} /> {name}
  </div>
));

const UserBadgeContainer = () => {
  const { name, avatar } = useContext(UserContext);
  return <UserBadge name={name} avatar={avatar} />;
};

UserBadgeContainer re-renders on every context change, but it’s cheap – it just reads two values and passes them to UserBadge. The expensive rendering in UserBadge is memoised and only runs when name or avatar actually changes.
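This works because React.memo’s default comparison is shallow – each top-level prop is compared by reference. A rough sketch of that comparison (illustrative, not React’s actual source):

```javascript
// Roughly what React.memo does when you don't pass a custom comparator:
// bail out of the re-render only if every top-level prop is
// reference-equal (Object.is) between the previous and next props.
const shallowEqual = (prevProps, nextProps) => {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every(key => Object.is(prevProps[key], nextProps[key]));
};
```

Passing name and avatar as separate primitive props means the comparison succeeds even when the context object’s identity changes – which is exactly why the container/presenter split pays off.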

When to reach for external state management

If your context holds complex, frequently-changing state with many consumers that each care about different parts of it, you’re fighting against Context’s design. Libraries like Zustand, Jotai, or Redux Toolkit let components subscribe to specific slices of state, avoiding the “everyone re-renders on every change” problem entirely.

# Use transitions for non-urgent updates React 18+

React 18 introduced useTransition and useDeferredValue to let you mark certain updates as non-urgent. The idea: keep the UI responsive for the thing the user is actively doing (typing, clicking) while deferring expensive secondary updates.

useTransition

Wrapping a state update in startTransition tells React it’s OK to interrupt that update if something more urgent comes in:

const Search = ({ items }) => {
  const [query, setQuery] = useState("");
  const [filtered, setFiltered] = useState(items);
  const [isPending, startTransition] = useTransition();

  const handleChange = (e) => {
    const value = e.target.value;
    setQuery(value);                      // urgent: keep input responsive
    startTransition(() => {
      setFiltered(items.filter(i =>       // non-urgent: can be deferred
        i.name.includes(value)
      ));
    });
  };

  return (
    <div>
      <input value={query} onChange={handleChange} />
      {isPending && <span>Filtering...</span>}
      <ItemList items={filtered} />
    </div>
  );
};

Without the transition, every keystroke updates both the input and triggers the expensive filter synchronously. If filtering is slow, the input feels laggy. With the transition, React prioritises the input update and renders the filtered results when it has time.

useDeferredValue

useDeferredValue achieves a similar effect from the consumer side. It returns a deferred version of a value that “lags behind” during urgent updates:

const Search = ({ items }) => {
  const [query, setQuery] = useState("");
  const deferredQuery = useDeferredValue(query);

  const filtered = useMemo(
    () => items.filter(i => i.name.includes(deferredQuery)),
    [items, deferredQuery]
  );

  return (
    <div>
      <input value={query} onChange={e => setQuery(e.target.value)} />
      <ItemList items={filtered} />
    </div>
  );
};

During rapid typing, query updates immediately (keeping the input responsive) while deferredQuery lags behind. The expensive filter only re-runs when React gets a chance to render the deferred value – and that render can be interrupted by the next keystroke.

When to use which

Use useTransition when you control the state update and want to explicitly mark it as non-urgent. Use useDeferredValue when you receive a value from above (props, context) and want to defer your response to changes in it.

Neither is a substitute for making your renders faster. They’re best used alongside other optimisations when you have an update that’s inherently expensive and the user experience requires responsiveness during that update.

# Pass useReducer dispatch instead of callbacks

React guarantees that the dispatch function from useReducer has a stable identity across renders. When you build derived update functions on top of useState setters — like setName and setEmail in the example below — each one needs useCallback and careful dependency management. With useReducer, dispatch is a single stable function that handles all updates — you can safely omit it from dependency arrays and pass it through context without memoisation gymnastics.

This matters when you have a parent that provides update functions to many descendants:

// Fragile: multiple callbacks that need useCallback + dependency management
const FormContext = React.createContext(null);

const FormProvider = ({ children }) => {
  const [form, setForm] = useState({ name: "", email: "" });

  const setName = useCallback(v => setForm(f => ({ ...f, name: v })), []);
  const setEmail = useCallback(v => setForm(f => ({ ...f, email: v })), []);

  const value = useMemo(() => ({ form, setName, setEmail }), [form, setName, setEmail]);

  return <FormContext.Provider value={value}>{children}</FormContext.Provider>;
};

Compare with useReducer:

const formReducer = (state, action) => {
  switch (action.type) {
    case "SET_FIELD":
      return { ...state, [action.field]: action.value };
    default:
      return state;
  }
};

const FormProvider = ({ children }) => {
  const [form, dispatch] = useReducer(formReducer, { name: "", email: "" });
  const value = useMemo(() => ({ form, dispatch }), [form]);
  return <FormContext.Provider value={value}>{children}</FormContext.Provider>;
};

// In a child component:
dispatch({ type: "SET_FIELD", field: "name", value: "Tom" });

dispatch never changes, so it never triggers re-renders in memoised consumers. You don’t need useCallback wrappers. You don’t need to worry about stale closures. The reducer itself is defined outside the component, so there’s no closure over state at all.

This pattern scales much better than passing individual useState setters through props or context — especially when the number of update actions grows.
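The scaling benefit shows up when requirements grow: a new kind of update is a new case in the reducer, not another useCallback wrapper threaded through the provider. A sketch extending the reducer above with a reset action (the action name is illustrative):

```javascript
// Adding behaviour means adding a case; dispatch, the provider value
// shape, and every consumer stay exactly as they were.
const formReducer = (state, action) => {
  switch (action.type) {
    case "SET_FIELD":
      return { ...state, [action.field]: action.value };
    case "RESET":
      return { name: "", email: "" };
    default:
      return state;
  }
};

// In a child component, same stable dispatch as before:
//   dispatch({ type: "RESET" });
```

Because the reducer is a pure function defined at module level, it’s also trivially unit-testable without rendering anything.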

# Consider hiding instead of unmounting

The usual React pattern for showing and hiding UI is conditional rendering:

{isVisible && <Panel />}

This unmounts Panel when isVisible is false and mounts a fresh one when it becomes true again. Mounting a component means running all its setup: initial renders of the entire subtree, effects, data fetching, DOM measurements, and so on.

If Panel is expensive to mount and the user frequently toggles it, consider keeping it in the DOM and hiding it with CSS instead:

<div style={{ display: isVisible ? "block" : "none" }}>
  <Panel />
</div>

Now Panel mounts once and stays mounted. Toggling visibility is just a CSS change – essentially free.

Tradeoffs

This isn’t always the right call. Keeping components mounted means:

  • They stay in memory. State, refs, DOM nodes – everything persists. For a single panel this is fine. For a list of a hundred hidden items, it may not be.
  • Effects keep running. Subscriptions, timers, and other effects set up in useEffect continue to execute. You may need to check visibility before doing work, which adds complexity.
  • Initial mount is more expensive. You’re mounting everything up front rather than deferring until it’s needed.

The pattern works well for UI that’s toggled frequently and is expensive to initialise: modals, tabs, sidebars, accordion panels. It’s less appropriate for content that’s rarely shown or that has significant ongoing side effects.

Tabs as a case study

Tabbed interfaces are a classic candidate. Without hiding:

{activeTab === "chart" && <Chart data={data} />}
{activeTab === "table" && <Table data={data} />}

Switching tabs unmounts and remounts the content. If Chart does expensive layout calculations on mount, the user feels it every time they switch back. With the hiding approach, both tabs mount once and switching is instant.

# Virtualise long lists

If you’re rendering hundreds or thousands of items in a list, rendering all of them is wasteful – the user can only see a screenful at a time. Virtualisation (or “windowing”) renders only the items currently visible in the viewport, plus a small buffer above and below.

Libraries like react-window and TanStack Virtual handle this:

import { useVirtualizer } from "@tanstack/react-virtual";

const VirtualList = ({ items }) => {
  const parentRef = useRef(null);
  const virtualizer = useVirtualizer({
    count: items.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 40,
  });

  return (
    <div ref={parentRef} style={{ height: 400, overflow: "auto" }}>
      <div style={{ height: virtualizer.getTotalSize(), position: "relative" }}>
        {virtualizer.getVirtualItems().map(virtualRow => (
          <div
            key={virtualRow.key}
            style={{
              position: "absolute",
              top: virtualRow.start,
              height: virtualRow.size,
              width: "100%",
            }}
          >
            {items[virtualRow.index].name}
          </div>
        ))}
      </div>
    </div>
  );
};

A list of 10,000 items will only ever render perhaps 20–30 DOM nodes. Scrolling replaces the content of those nodes rather than creating new ones.

When it’s worth it

Virtualisation adds complexity – scroll position management, dynamic sizing, keyboard navigation, and accessibility all need extra work. For a list of 50 items, just render them all. For 500+, or for items with expensive render logic, virtualisation can transform the experience.

The two main signals that you need it:

  • Initial render is slow because you’re mounting thousands of components
  • Scrolling is janky because the browser is painting and laying out a massive DOM tree
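The core windowing arithmetic these libraries do for you is small enough to sketch in plain JavaScript – fixed-height items assumed, and the helper name is made up:

```javascript
// Given the scroll position and viewport size, compute which item
// indices to render, plus an "overscan" buffer on each side so items
// appear before they scroll into view.
const getVisibleRange = (scrollTop, viewportHeight, itemHeight, itemCount, overscan = 5) => {
  const first = Math.floor(scrollTop / itemHeight);
  const visibleCount = Math.ceil(viewportHeight / itemHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(itemCount - 1, first + visibleCount + overscan),
  };
};
```

Each rendered item is then absolutely positioned at index * itemHeight inside a container whose total height is itemCount * itemHeight, which is what keeps the scrollbar honest. The hard parts the libraries solve are dynamic item sizes and measurement, not this arithmetic.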

Deep Cuts

Situational techniques that depend on your stack, architecture, or React version. Not every app needs these, but when they apply, they can make a significant difference.

# Start fetches before components mount

The standard React data-fetching pattern starts the fetch inside useEffect, which means the component has to mount first:

User clicks link
  → Router renders new route
    → Component mounts
      → useEffect fires
        → Fetch starts
          → Data arrives
            → setState triggers re-render
              → UI updates

Every step in that chain is sequential. The fetch can’t start until the component mounts, the component can’t mount until React finishes rendering the route, and the route can’t render until the previous commit is done. In deeply nested trees, this can add 50–250ms of dead time before the network request even leaves the browser.

The “render-as-you-fetch” pattern starts the fetch at the point of user interaction — typically at the router level — before the consuming component exists:

// Start fetch immediately on route change
const loader = () => {
  return { dataPromise: fetchUserProfile() };
};

// Component consumes the already-in-flight promise
const Profile = () => {
  const { dataPromise } = useLoaderData();
  const data = use(dataPromise);
  return <ProfileView user={data} />;
};

Now the fetch and the component render happen in parallel rather than sequentially. By the time the component mounts, the data may already be available.

Most modern data-fetching libraries support this. TanStack Query has prefetchQuery, which you can call from route loaders or event handlers. SWR has preload. React Router and TanStack Router both support loader functions that run before route components mount.
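The idea behind these preload helpers is simple enough to hand-roll – start the fetch once and hand the same in-flight promise to whoever asks. A minimal sketch (the helper, cache, and keys are illustrative, not any library’s API):

```javascript
// Module-level cache of in-flight (or settled) promises, keyed by resource.
const preloadCache = new Map();

const preload = (key, fetcher) => {
  // First call kicks off the fetch; later calls with the same key reuse
  // the same promise, so consuming it never triggers a second request.
  if (!preloadCache.has(key)) {
    preloadCache.set(key, fetcher());
  }
  return preloadCache.get(key);
};

// On link click or route change, before the component exists:
//   preload("user:42", () => fetchUserProfile(42));
// In the component, consume the same (possibly already-resolved) promise:
//   const data = use(preload("user:42", () => fetchUserProfile(42)));
```

A real implementation also needs invalidation and error eviction, which is most of what the data-fetching libraries add on top.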

The key insight is that data fetching is inherently a side effect of navigation, not of component mounting. Treating it as a mount side effect creates an artificial dependency chain that adds latency.

This pattern matters most for routes with large component trees and slow API calls. For components that render quickly and fetch fast endpoints, the waterfall is barely noticeable.

# Runtime CSS-in-JS has a render cost

Runtime CSS-in-JS libraries like styled-components and Emotion generate styles during the JavaScript render phase. Each styled component creates a Context.Consumer internally, resolves its styles from props and theme, serialises them into CSS, and injects them into the document head. This happens every render.

Benchmarks consistently show styled-components renders taking roughly 2x longer than equivalent components using plain CSS classes. The overhead comes from three sources: style computation during render, context consumption per styled component, and dynamic stylesheet injection.

In a component that renders once and rarely updates, this doesn’t matter. In a list of 500 items that re-renders on scroll, or a form with many styled inputs that updates on every keystroke, it’s measurable.

// Runtime: style computation happens during render
const Button = styled.button`
  background: ${props => props.primary ? "#1de9b6" : "#eee"};
  padding: 8px 16px;
`;

// Compile-time: CSS is extracted at build time, zero runtime cost
// (using vanilla-extract, Linaria, Panda CSS, etc.)
import { button } from "./styles.css";
<button className={button({ primary: true })}>Click</button>

Compile-time CSS-in-JS alternatives extract static CSS at build time. The browser loads a regular stylesheet; no JavaScript runs to generate or inject styles. You get the same developer experience (co-located styles, type safety, theming) without the runtime cost.

If you’re starting a new project, consider a compile-time solution. If you’re stuck with runtime CSS-in-JS and seeing render performance issues, profile your styled components — you may find that the styling layer is a significant contributor.

# Use useEffectEvent to read latest values in effects React 19.2+

Effects that need to read the latest props or state without re-synchronising when those values change have always been awkward in React. The common workaround is the ref pattern:

// The workaround: manually keeping a ref in sync
const onSaveRef = useRef(onSave);
onSaveRef.current = onSave;

useEffect(() => {
  const id = setInterval(() => onSaveRef.current(data), 5000);
  return () => clearInterval(id);
}, [data]); // interval restarts when data changes

This works but it’s verbose, error-prone, and easy to get wrong. useEffectEvent is the official solution:

const onTick = useEffectEvent(() => {
  onSave(data);
});

useEffect(() => {
  const id = setInterval(onTick, 5000);
  return () => clearInterval(id);
}, []); // no dependencies: interval is set up once and stays stable

useEffectEvent returns a function that always reads the latest values of everything it closes over, but it isn’t a reactive dependency — the effect won’t re-run when onSave or data change. The interval is set up once and stays stable, while the function it calls always has access to current values.

The performance benefit is that your effects no longer need to tear down and set up subscriptions, timers, or connections just because a callback prop changed. Without useEffectEvent, adding onSave to the dependency array would restart the interval every time the parent re-renders with a new function reference — which, unless every ancestor is carefully memoised, is every render.

The rule of thumb: if your effect needs to do something with a value (read it) but shouldn’t react to changes in that value, wrap that logic in useEffectEvent.

# Pre-render hidden views with <Activity> React 19.2+

React 19.2 ships an <Activity> component (the long-awaited “Offscreen” API) that lets you pre-render content the user is likely to navigate to, without the tradeoffs of either unmounting or display: none.

<Activity mode={tab === "comments" ? "visible" : "hidden"}>
  <Comments />
</Activity>
<Activity mode={tab === "settings" ? "visible" : "hidden"}>
  <Settings />
</Activity>

When mode is "hidden", React keeps the component tree and its DOM in memory but hides the content, unmounts its effects, and defers any pending updates until React has nothing higher-priority left to work on. When mode switches to "visible", React reveals the content and remounts its effects.

This is fundamentally different from the two existing approaches:

  • Conditional rendering ({show && <Panel />}) destroys everything on hide and rebuilds from scratch on show. Fast to hide, expensive to show.
  • CSS hiding (display: none) keeps everything mounted and running — effects, subscriptions, timers all stay active. Fast to toggle, but you pay the ongoing cost of a fully-alive component tree.

<Activity> gives you the best of both: state and DOM are preserved (so scroll positions, form inputs, and component state survive round-trips), but effects and subscriptions are cleaned up while hidden. When the user switches tabs, the content is already rendered — it just needs its effects remounted, which is much cheaper than a full mount.

The main use case is tab-based or route-based navigation where the user frequently switches between views. Pre-rendering the next likely destination eliminates the mount cost entirely when they get there.

# Debugging: detect what changed

The React DevTools profiler will tell you that a component rendered, and sometimes why. But when its explanation is wrong or incomplete – and it often is – you need to find the actual cause yourself.

These snippets log exactly which props or state values changed between renders, which is usually enough to identify the culprit.

Function component

function useWhyDidYouRender(name, props) {
  const prev = useRef(props);
  useEffect(() => {
    const changes = {};
    for (const key of Object.keys({ ...prev.current, ...props })) {
      if (prev.current[key] !== props[key]) {
        changes[key] = { from: prev.current[key], to: props[key] };
      }
    }
    if (Object.keys(changes).length > 0) {
      console.log(`[${name}] changed:`, changes);
    }
    prev.current = props;
  });
}

// Usage
const MyComponent = (props) => {
  useWhyDidYouRender("MyComponent", props);
  // ...
};

Class component

componentDidUpdate(prevProps, prevState) {
  for (const key of Object.keys(this.props)) {
    if (prevProps[key] !== this.props[key]) {
      console.log(`[prop] ${key} changed:`, prevProps[key], "→", this.props[key]);
    }
  }
  for (const key of Object.keys(this.state)) {
    if (prevState[key] !== this.state[key]) {
      console.log(`[state] ${key} changed:`, prevState[key], "→", this.state[key]);
    }
  }
}

What to look for

The output will typically reveal one of these:

  • A function prop has changed – a new function is being created each render. See referential equality.
  • An object or array prop has changed – the parent is creating a new object with the same values. See moving constants out.
  • A prop you didn’t expect to change is changing – follow it back up the component tree. Something further up is recreating it unnecessarily.
  • No props changed at all – the component is re-rendering because its parent re-rendered. See composition patterns or React.memo.

Drop these into the component you’re investigating, reproduce the issue, and read the console. It’s low-tech, but it’s more reliable than the profiler for pinpointing reference changes.

# The React Compiler React 19+

The React Compiler (formerly “React Forget”) is a build-time tool that automatically adds memoisation to your components. It analyses your code and inserts the equivalent of React.memo, useMemo, and useCallback where it determines they’re safe and beneficial.

This means many of the manual techniques in this guide – stabilising object references, wrapping callbacks in useCallback, memoising computed values – are handled for you.

What it does

The compiler analyses each component’s data flow at build time and generates code that:

  • Caches JSX elements, objects, and arrays that don’t need to change between renders
  • Stabilises function references that close over unchanged values
  • Skips re-rendering child components whose props haven’t changed

It follows the Rules of React – components must be pure functions of their props and state, hooks must be called unconditionally, and so on. If your code already follows these rules (and it should), the compiler can optimise it aggressively.

What it doesn’t do

The compiler doesn’t help with:

  • Structural problems. If your component tree is shaped such that a state change at the top re-renders everything, the compiler will memoise individual components but can’t restructure your tree for you. Composition patterns still matter.
  • Expensive computations. The compiler caches results based on input identity. If inputs change every render, caching doesn’t help – you need to find a way to stabilise the inputs or move the computation.
  • Context re-renders. A context change still triggers re-renders in all consumers. The compiler can memoise the rendering within each consumer, but can’t prevent the render from being triggered.
  • Code that breaks the rules. Components with side effects during render, mutated props, or conditionally-called hooks will either be skipped by the compiler or produce incorrect behaviour.

Should you stop writing manual memoisation?

If you’re using the compiler: mostly, yes. Remove useMemo, useCallback, and React.memo calls that exist purely for performance – the compiler handles them better because it can reason about the full data flow. Keep them only when they serve a semantic purpose (e.g., useMemo to maintain a stable reference that’s used as a dependency elsewhere).

If you’re not using the compiler yet, the manual techniques in this guide remain your primary tools. The compiler is a build step and won’t help you without explicit adoption.