4 Mendix performance killers and how to solve them
05/12/2025
Your Mendix app works, but it's slow. And slowness kills adoption. When a dashboard takes 25 seconds to load, users don't wait. They close the browser, go back to Excel, and your carefully designed solution becomes shelfware.

Performance problems rarely show up during development. With clean test data and local hosting, everything feels fast enough. Then production hits, and what seemed fine suddenly isn't.
These challenges are common across Mendix projects, and the good news is that the causes are predictable. The same patterns repeat: domain models that don't scale, microflows doing too much work, pages that load everything at once, and queries that bypass database optimization. We'll share them with you in this blog!
The 4 Mendix performance killers
Killer #1: Domain model design
Your domain model is the foundation. When it's designed without performance in mind, every query, every screen, every report inherits that slowness.
During modeling, it's tempting to prioritize flexibility over efficiency. "We might need this later" becomes a reason to add another calculated attribute or event handler. But each choice has a cost and that cost compounds when you're working with real data volumes.
What slows you down:
Missing indexes on frequently queried attributes.
If you're filtering or sorting on an attribute in XPath, and it's not indexed, the database scans every row. With 10,000 records, that's noticeable. With 100,000, it's painful.
→ Fix: Index the attributes you query often; it's the easiest performance win you'll get.
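As a hypothetical illustration (entity and attribute names are made up), suppose a retrieve on an `Order` entity uses this XPath constraint:

```
[Status = 'Open' and OrderDate >= $StartOfDay]
```

Without indexes on `Status` and `OrderDate`, the database scans the whole `Order` table to evaluate this. Adding indexes on those attributes in the entity's properties in Studio Pro turns the scan into an index lookup.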
Calculated attributes in data grids.
They look clean in your domain model, but they're computed on-the-fly for every row, every time the page refreshes. Load 500 records with a calculated attribute? That's 500 calculations. Every. Single. Time.
→ Fix: Store the value as a regular attribute and update it only when the underlying data changes.
Before and After Commit event handlers.
Sometimes you need them, but if your before-commit (BCO) or after-commit (ACO) logic is heavy and you're committing objects in a loop, the cost multiplies with every iteration.
→ Fix: Move validation or enrichment logic outside the commit cycle where possible, or batch updates to minimize handler calls.
Killer #2: Microflow patterns
Microflows are where business logic lives and where performance problems quietly accumulate. A microflow that works fine with 50 records can break down completely under 5,000.
The issue is often subtle. Developers build logic that's correct but inefficient—it does what it's supposed to do, but in a way that doesn't scale.
What slows you down:
Over-retrieving data.
Pulling 10,000 records when you only need 50 wastes memory and processing time.
→ Fix: Always use XPath constraints to filter at the database level. Don't retrieve everything at once and filter in-memory.
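For example (entity and association names are hypothetical), instead of retrieving every invoice and checking the status inside the microflow, put the condition in the retrieve action's XPath constraint:

```
[Status = 'Unpaid' and MyModule.Invoice_Customer = $CurrentCustomer]
```

The database now returns only the rows you actually need; if you only need the first page, also set a limit and offset on the retrieve action.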
Loops with database actions inside.
This is the classic killer. If you're committing or retrieving objects inside a loop, you're making a separate database call for every iteration. One loop with 1,000 items? That's 1,000 round-trips.
→ Fix: Batch your operations. Retrieve what you need upfront, process it in-memory, and commit once outside the loop.
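Sketched as microflow steps (with hypothetical names), the batched pattern looks like this:

```
Retrieve ProductList from database            // one round-trip
    XPath: [MyModule.Product_Category = $Category]
Loop over ProductList
    Change $Product: Price = $Product/Price * 1.1   // Commit: No
End loop
Commit ProductList                            // one commit action for the whole list
```

One retrieve and one commit replace thousands of per-iteration database calls; the change actions in between run entirely in memory.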
Not using task queues for heavy processing.
Long-running imports, report generation, batch updates: if these block the user, you're creating a bad experience and risking timeouts.
→ Fix: Push heavy tasks to task queues. Let them run asynchronously and give users a refresh mechanism or notification when done.
Killer #3: Client-side performance
Your app might be fast on the server, but if the browser struggles to render the page, users won't notice the difference. Client-side performance matters just as much—and it's often overlooked.
Mendix makes it easy to build rich UIs quickly, but that convenience can hide performance costs. Nested containers, heavy widgets, and large datasets all add up, and users feel every millisecond.
What slows you down:
Nested data views and list views.
Every nested container triggers its own data retrieval. A page with three levels of nesting can easily turn into a dozen separate database queries.
→ Fix: Limit nesting depth. Flatten your page structure where possible, or consolidate data retrieval higher up.
Large datasets without pagination.
Loading 5,000 rows at once isn't just slow—it crashes browsers on weaker devices.
→ Fix: Always paginate. Always. Even if you "only" have a few thousand records now, plan for growth.
Microflow data sources with heavy calculations.
If your data source retrieves all objects upfront and does per-row calculations, the page grinds to a halt. This is especially painful on dashboards.
→ Fix: Move logic server-side, pre-calculate values, or push the work into the database entirely.
Killer #4: XPath query optimization
XPath is how you query data in Mendix. It's powerful, flexible and easy to write badly. A poorly structured XPath query can bypass database indexes entirely, forcing full table scans even when you're only looking for a handful of records.
XPath looks simple, so it's easy to assume all queries perform equally. They don't. Small structural choices (constraint order, string operations, OR logic) make massive differences at scale.
What slows you down:
Wrong constraint order. Databases optimize best when you filter by associations first, then attributes. If you filter on an attribute before narrowing down by association, you're scanning way more rows than necessary.
→ Fix: Always structure your constraints: associations first, then attributes.
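A hypothetical example: both constraints below return the same `Order` rows, but the first narrows by the customer association before touching the `Status` attribute:

```
[MyModule.Order_Customer = $Customer and Status = 'Open']
[Status = 'Open' and MyModule.Order_Customer = $Customer]
```

Prefer the first form; the second evaluates the broad attribute filter before the selective association.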
Expensive string operations. The contains function can't use indexes effectively—it has to scan every value. If possible, use starts-with instead.
→ Fix: Where exact matches or prefix matches work, use them. Reserve contains for cases where there's no alternative.
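For instance, searching on a hypothetical `CompanyName` attribute:

```
[contains(CompanyName, 'Acme')]
[starts-with(CompanyName, 'Acme')]
```

The first translates to SQL `LIKE '%Acme%'`, which a standard index can't help with; the second becomes `LIKE 'Acme%'`, which can walk an index on `CompanyName`.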
OR operators. Databases struggle with OR in queries, they can't leverage a single index efficiently. Writing [Status = 'A' or Status = 'B'] often bypasses optimization entirely.
→ Fix: When possible, split OR logic into separate queries and combine the results. It's more verbose, but often faster.
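As a microflow sketch (with hypothetical list names), splitting the OR from the example above:

```
Retrieve ListA from database:   [Status = 'A']
Retrieve ListB from database:   [Status = 'B']
Union ListA and ListB           // list operation, in memory
```

Each retrieve can now use the index on `Status`; the union happens in memory on two already-filtered lists.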
Knowing these four killers is one thing, seeing them combine in a real project is another.
Want to see these killers in action? We worked on a project where all four showed up at once: domain model issues, inefficient microflows, client-side overload, and slow XPath queries. The dashboard took 25 seconds to load and users gave up.
We fixed it in one go using OQL view entities. Load time: 1.5 seconds. Read it all in our next blog.
Dealing with slow dashboards or frustrated users yourself? We've fixed this before and we know exactly what to look for.
Get a performance audit. We'll show you what's slowing your app down and how to fix it.