React Performance Optimisation: Patterns That Actually Matter
React performance is a topic surrounded by bad advice. Developers reach for useMemo, useCallback, and React.memo by default, not realising these have their own overhead. This article is about what actually moves the needle: measured, targeted optimisation rather than superstition.
Rule 0: Profile Before You Optimise
Never optimise blind. Use React DevTools Profiler to identify what is actually slow before writing a single memo.
- Open Chrome DevTools → Profiler tab (added by the React DevTools extension)
- Record an interaction (click, scroll, type)
- Look for components that re-render unexpectedly or take long to render
- Target only those, not everything
The biggest performance gains come from fixing architecture, not from sprinkling memo everywhere.
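Beyond the DevTools UI, React also exposes a programmatic Profiler component whose onRender callback receives timing data for every commit. A minimal sketch of how you might log slow commits (the 16ms threshold is our own assumption, one frame at 60fps, not something React prescribes):

```typescript
// Signature of the onRender callback React's <Profiler> component accepts
// (simplified — React also passes baseDuration, startTime, and commitTime).
type RenderPhase = "mount" | "update" | "nested-update";

const slowRenders: { id: string; phase: RenderPhase; duration: number }[] = [];

function onRender(id: string, phase: RenderPhase, actualDuration: number): void {
  // Record any commit that blew the one-frame budget
  if (actualDuration > 16) {
    slowRenders.push({ id, phase, duration: actualDuration });
  }
}

// Usage (JSX): <Profiler id="ArticleList" onRender={onRender}>…</Profiler>
```

This is handy for catching regressions in CI or logging slow commits from real users, where the DevTools UI isn't available.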
Understanding Why React Re-renders
React re-renders a component when:
- Its own state changes
- Its parent re-renders (even if props haven't changed)
- A context it consumes changes
Most "performance problems" in React are simply unnecessary re-renders from #2. Understanding this focuses your optimisation effort.
React.memo — Wrap Expensive Children, Not Everything
React.memo skips re-rendering a component when its props haven't changed (by shallow comparison). Use it only when:
- The component is expensive to render
- It re-renders frequently with the same props
// A chart that runs heavy D3 computation — good candidate for memo
const ArticleChart = React.memo(function ArticleChart({
data,
width,
height,
}: {
data: DataPoint[];
width: number;
height: number;
}) {
// expensive SVG rendering
return <svg width={width} height={height}>...</svg>;
});
React.memo does a shallow comparison of props. Objects and arrays created inline will always fail this comparison:
// memo is useless here — new array created on every parent render
<ArticleChart data={[...articles]} />
// Fix — memoize the array in the parent
const chartData = useMemo(() => articles.map(toDataPoint), [articles]);
<ArticleChart data={chartData} />
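To see why inline objects and arrays defeat memo, here is a simplified sketch of the shallow comparison it performs (React's real implementation differs in details, but the principle is the same):

```typescript
// Sketch of React.memo's default prop comparison: every prop must be
// reference-equal (Object.is), so a freshly created array never matches.
function shallowEqual(a: Record<string, unknown>, b: Record<string, unknown>): boolean {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  // Compare each prop by identity, not by contents
  return aKeys.every(key => Object.is(a[key], b[key]));
}
```

Two renders passing the same memoised array compare equal; two renders each spreading a fresh array do not, even though the contents are identical.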
useMemo — For Expensive Computations Only
useMemo caches the result of a computation between renders.
// No point — object creation is cheap, memo overhead may cost more
const style = useMemo(() => ({ color: "red" }), []);
// Good use — expensive computation that only needs to run when articles change
const grouped = useMemo(() => {
return articles.reduce<Record<string, Article[]>>((acc, article) => {
const topic = article.topic ?? "Other";
acc[topic] = [...(acc[topic] ?? []), article];
return acc;
}, {});
}, [articles]);
The rules for useMemo:
- Measure first. If the computation is fast, useMemo adds more overhead than it saves
- Only worthwhile if the result is used by many consumers or passed to a memo-wrapped child
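A useful habit is extracting the computation into a pure function, which can then be unit-tested independently of useMemo. A sketch, assuming the Article shape used throughout this article:

```typescript
// The grouping logic from above, extracted as a pure, testable function.
interface Article {
  id: string;
  title: string;
  topic?: string;
}

function groupByTopic(articles: Article[]): Record<string, Article[]> {
  return articles.reduce<Record<string, Article[]>>((acc, article) => {
    const topic = article.topic ?? "Other";
    // Mutating the accumulator we own avoids the O(n²) copying of spread-per-item
    (acc[topic] ??= []).push(article);
    return acc;
  }, {});
}
```

In the component, the memoised version becomes a one-liner: `const grouped = useMemo(() => groupByTopic(articles), [articles]);`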
useCallback — For Stable Function References
useCallback returns the same function reference between renders. This matters when passing a function to:
- A React.memo-wrapped component (so the function reference doesn't break memo)
- A useEffect dependency array (so the effect doesn't re-run needlessly)
function ArticleList({ topic }: { topic: string }) {
const [articles, setArticles] = useState<Article[]>([]); // fetching omitted for brevity
// Stable reference — won't cause ArticleItem's memo to break
const handleDelete = useCallback(async (id: string) => {
await api.articles.delete(id);
setArticles(prev => prev.filter(a => a.id !== id));
}, []); // empty deps — no closure over changing values
return (
<ul>
{articles.map(a => (
<ArticleItem key={a.id} article={a} onDelete={handleDelete} />
))}
</ul>
);
}
const ArticleItem = React.memo(function ArticleItem({
article,
onDelete,
}: {
article: Article;
onDelete: (id: string) => void;
}) {
return (
<li>
{article.title}
<button onClick={() => onDelete(article.id)}>Delete</button>
</li>
);
});
Without useCallback, handleDelete is a new function on every render → React.memo on ArticleItem is defeated.
Code Splitting with React.lazy and Suspense
Ship less JavaScript on initial load by splitting rarely-used routes or heavy components into separate chunks.
import { lazy, Suspense } from "react";
const ArticleEditor = lazy(() => import("./ArticleEditor"));
const Analytics = lazy(() => import("./Analytics"));
function App() {
return (
<Suspense fallback={<div>Loading...</div>}>
<Routes>
<Route path="/edit/:id" element={<ArticleEditor />} />
<Route path="/analytics" element={<Analytics />} />
</Routes>
</Suspense>
);
}
Each lazy import creates a separate chunk. The browser only downloads ArticleEditor when the user navigates to /edit/.... This is one of the highest-impact performance changes available.
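One refinement worth knowing: the chunk can be fetched before the user navigates, e.g. on link hover. A hypothetical helper (not part of React) that caches the loader promise, so preloading and rendering only download the chunk once:

```typescript
// Hypothetical helper: wraps a dynamic-import loader so the chunk can be
// preloaded ahead of navigation. The promise is cached, so preload + lazy
// render trigger a single download.
type Loader<T> = () => Promise<T>;

function withPreload<T>(loader: Loader<T>): Loader<T> & { preload: () => Promise<T> } {
  let cached: Promise<T> | undefined;
  const load = () => (cached ??= loader());
  return Object.assign(load, { preload: load });
}

// Usage (sketch):
// const loadEditor = withPreload(() => import("./ArticleEditor"));
// const ArticleEditor = lazy(loadEditor);
// <Link to="/edit/1" onMouseEnter={loadEditor.preload}>Edit</Link>
```

By the time the user clicks, the chunk is often already in the browser cache, making the lazy boundary invisible.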
Virtualise Long Lists
Rendering 1000 DOM nodes is expensive. For long lists, only render what is visible on screen.
npm install @tanstack/react-virtual
import { useVirtualizer } from "@tanstack/react-virtual";
import { useRef } from "react";
function VirtualArticleList({ articles }: { articles: Article[] }) {
const parentRef = useRef<HTMLDivElement>(null);
const virtualizer = useVirtualizer({
count: articles.length,
getScrollElement: () => parentRef.current,
estimateSize: () => 80, // estimated row height in px
});
return (
<div ref={parentRef} style={{ height: "600px", overflow: "auto" }}>
<div style={{ height: `${virtualizer.getTotalSize()}px`, position: "relative" }}>
{virtualizer.getVirtualItems().map(item => (
<div
key={item.key}
style={{
position: "absolute",
top: 0,
left: 0,
width: "100%",
height: `${item.size}px`,
transform: `translateY(${item.start}px)`,
}}
>
<ArticleRow article={articles[item.index]} />
</div>
))}
</div>
</div>
);
}
A list of 10,000 items renders roughly as fast as a list of 20, because only the visible rows ever reach the DOM.
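The core arithmetic a virtualiser performs is simple. A pure sketch for fixed-height rows (real libraries like @tanstack/react-virtual also handle variable heights and measurement):

```typescript
// Compute which rows are visible for a given scroll position. Overscan adds
// a few extra rows above and below to avoid blank flashes while scrolling.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 3,
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(totalRows, Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
  return { start, end };
}
```

With 10,000 rows of 80px in a 600px viewport, only about 8 rows (plus overscan) exist in the DOM at any moment, regardless of scroll position.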
Debounce Expensive Handlers
Firing an expensive operation (search API call, filter computation) on every keystroke is wasteful. Debounce it.
import { useState } from "react";
import { useDebouncedCallback } from "use-debounce";
function SearchBar({ onSearch }: { onSearch: (q: string) => void }) {
const [value, setValue] = useState("");
const debouncedSearch = useDebouncedCallback((query: string) => {
onSearch(query);
}, 300);
return (
<input
value={value}
onChange={e => {
setValue(e.target.value); // instant UI update
debouncedSearch(e.target.value); // delayed API call
}}
placeholder="Search..."
/>
);
}
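Under the hood, debouncing is just a resettable timer. A minimal sketch of the mechanism (the real hook additionally ties the timer to the component lifecycle, cancelling on unmount, which is why a library hook is preferable inside components):

```typescript
// Minimal debounce: each call cancels the previous pending timer, so the
// wrapped function only fires after `wait` ms of silence.
function debounce<A extends unknown[]>(fn: (...args: A) => void, wait: number): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer); // discard the previous pending call
    timer = setTimeout(() => fn(...args), wait);
  };
}
```

For a 300ms wait, typing "react" fires one search for "react" rather than five searches for each prefix.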
Avoid Anonymous Inline Functions in JSX When They Matter
Inline functions in JSX create a new reference on every render. For components wrapped in memo, this defeats the optimisation:
// Breaks memo on ExpensiveChild — new function every render
<ExpensiveChild onClick={() => handleClick(item.id)} />
// Fix — memoize in the component body (hooks can't be called inside loops or callbacks)
const handleItemClick = useCallback(() => handleClick(item.id), [item.id]);
<ExpensiveChild onClick={handleItemClick} />
For non-memoised components, inline functions are fine. React is fast enough.
Use the Correct Key Prop on Lists
React uses key to determine which list items changed. A wrong key causes unnecessary unmount/remount cycles.
// Wrong — index as key breaks reconciliation when list order changes
{articles.map((a, index) => <ArticleCard key={index} article={a} />)}
// Correct — stable, unique ID
{articles.map(a => <ArticleCard key={a.id} article={a} />)}
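Reconciliation matches old and new children by key, which a small sketch makes concrete (a simplification of React's actual child-diffing algorithm):

```typescript
// Which keys survive from one render to the next? Matched keys mean the DOM
// node is moved and reused; unmatched keys mean unmount + remount.
function reusedKeys(oldKeys: string[], newKeys: string[]): string[] {
  const old = new Set(oldKeys);
  return newKeys.filter(k => old.has(k));
}

// With stable ids, reordering ["a","b","c"] to ["c","a","b"] reuses all three
// nodes. With index keys, both renders produce keys ["0","1","2"] — every key
// "matches", but each now points at a different article, so React must rewrite
// every row's content (and any row-local state sticks to the wrong item).
```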
Optimise Images
Use native lazy loading and explicit dimensions to prevent layout shift:
<img
src={article.headerImage}
alt={article.title}
width={800}
height={400}
loading="lazy" // native browser lazy loading
decoding="async"
/>
For Next.js use next/image. For Vite + React apps, consider @unpic/react for automatic responsive images.
Conclusion
React performance optimisation follows a simple rule: measure first, fix what hurts. Most apps need code splitting more than they need useMemo. Most lists need virtualisation before they need React.memo. Profile with React DevTools, identify the actual bottlenecks, and apply targeted fixes. The patterns that consistently move the needle are: lazy-loaded routes, virtualised lists, debounced handlers, and stable prop references for memoised children.